00:00:00.000 Started by upstream project "autotest-spdk-master-vs-dpdk-v22.11" build number 2386
00:00:00.000 originally caused by:
00:00:00.000 Started by upstream project "nightly-trigger" build number 3651
00:00:00.000 originally caused by:
00:00:00.000 Started by timer
00:00:00.000 Started by timer
00:00:00.037 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.037 The recommended git tool is: git
00:00:00.037 using credential 00000000-0000-0000-0000-000000000002
00:00:00.043 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.057 Fetching changes from the remote Git repository
00:00:00.059 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.071 Using shallow fetch with depth 1
00:00:00.071 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.071 > git --version # timeout=10
00:00:00.083 > git --version # 'git version 2.39.2'
00:00:00.083 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.094 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.094 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:03.424 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:03.435 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:03.445 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:03.445 > git config core.sparsecheckout # timeout=10
00:00:03.454 > git read-tree -mu HEAD # timeout=10
00:00:03.468 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:03.490 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:03.491 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:03.591 [Pipeline] Start of Pipeline
00:00:03.602 [Pipeline] library
00:00:03.603 Loading library shm_lib@master
00:00:03.603 Library shm_lib@master is cached. Copying from home.
00:00:03.616 [Pipeline] node
00:00:03.645 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest
00:00:03.646 [Pipeline] {
00:00:03.657 [Pipeline] catchError
00:00:03.658 [Pipeline] {
00:00:03.672 [Pipeline] wrap
00:00:03.683 [Pipeline] {
00:00:03.693 [Pipeline] stage
00:00:03.694 [Pipeline] { (Prologue)
00:00:03.709 [Pipeline] echo
00:00:03.710 Node: VM-host-WFP7
00:00:03.716 [Pipeline] cleanWs
00:00:03.725 [WS-CLEANUP] Deleting project workspace...
00:00:03.725 [WS-CLEANUP] Deferred wipeout is used...
00:00:03.733 [WS-CLEANUP] done
00:00:03.903 [Pipeline] setCustomBuildProperty
00:00:03.970 [Pipeline] httpRequest
00:00:04.409 [Pipeline] echo
00:00:04.411 Sorcerer 10.211.164.20 is alive
00:00:04.420 [Pipeline] retry
00:00:04.422 [Pipeline] {
00:00:04.436 [Pipeline] httpRequest
00:00:04.441 HttpMethod: GET
00:00:04.441 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:04.442 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:04.443 Response Code: HTTP/1.1 200 OK
00:00:04.444 Success: Status code 200 is in the accepted range: 200,404
00:00:04.444 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:04.755 [Pipeline] }
00:00:04.773 [Pipeline] // retry
00:00:04.780 [Pipeline] sh
00:00:05.069 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:05.084 [Pipeline] httpRequest
00:00:05.418 [Pipeline] echo
00:00:05.419 Sorcerer 10.211.164.20 is alive
00:00:05.428 [Pipeline] retry
00:00:05.430 [Pipeline] {
00:00:05.443 [Pipeline] httpRequest
00:00:05.447 HttpMethod: GET
00:00:05.448 URL: http://10.211.164.20/packages/spdk_557f022f641abf567fb02704f67857eb8f6d9ff3.tar.gz
00:00:05.448 Sending request to url: http://10.211.164.20/packages/spdk_557f022f641abf567fb02704f67857eb8f6d9ff3.tar.gz
00:00:05.449 Response Code: HTTP/1.1 200 OK
00:00:05.450 Success: Status code 200 is in the accepted range: 200,404
00:00:05.450 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_557f022f641abf567fb02704f67857eb8f6d9ff3.tar.gz
00:00:27.226 [Pipeline] }
00:00:27.245 [Pipeline] // retry
00:00:27.253 [Pipeline] sh
00:00:27.543 + tar --no-same-owner -xf spdk_557f022f641abf567fb02704f67857eb8f6d9ff3.tar.gz
00:00:30.098 [Pipeline] sh
00:00:30.384 + git -C spdk log --oneline -n5
00:00:30.384 557f022f6 bdev: Change 1st parameter of bdev_bytes_to_blocks from bdev to desc
00:00:30.384 c0b2ac5c9 bdev: Change void to bdev_io pointer of parameter of _bdev_io_submit()
00:00:30.384 92fb22519 dif: dif_generate/verify_copy() supports NVMe PRACT = 1 and MD size > PI size
00:00:30.384 79daf868a dif: Add SPDK_DIF_FLAGS_NVME_PRACT for dif_generate/verify_copy()
00:00:30.384 431baf1b5 dif: Insert abstraction into dif_generate/verify_copy() for NVMe PRACT
00:00:30.404 [Pipeline] withCredentials
00:00:30.414 > git --version # timeout=10
00:00:30.426 > git --version # 'git version 2.39.2'
00:00:30.444 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:00:30.446 [Pipeline] {
00:00:30.455 [Pipeline] retry
00:00:30.457 [Pipeline] {
00:00:30.472 [Pipeline] sh
00:00:30.758 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4
00:00:31.033 [Pipeline] }
00:00:31.051 [Pipeline] // retry
00:00:31.056 [Pipeline] }
00:00:31.074 [Pipeline] // withCredentials
00:00:31.084 [Pipeline] httpRequest
00:00:31.466 [Pipeline] echo
00:00:31.469 Sorcerer 10.211.164.20 is alive
00:00:31.480 [Pipeline] retry
00:00:31.483 [Pipeline] {
00:00:31.499 [Pipeline] httpRequest
00:00:31.504 HttpMethod: GET
00:00:31.505 URL: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:00:31.506 Sending request to url: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:00:31.510 Response Code: HTTP/1.1 200 OK
00:00:31.511 Success: Status code 200 is in the accepted range: 200,404
00:00:31.511 Saving response body to /var/jenkins/workspace/raid-vg-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:01:19.456 [Pipeline] }
00:01:19.467 [Pipeline] // retry
00:01:19.473 [Pipeline] sh
00:01:19.751 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:01:21.145 [Pipeline] sh
00:01:21.435 + git -C dpdk log --oneline -n5
00:01:21.435 caf0f5d395 version: 22.11.4
00:01:21.435 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt"
00:01:21.435 dc9c799c7d vhost: fix missing spinlock unlock
00:01:21.435 4307659a90 net/mlx5: fix LACP redirection in Rx domain
00:01:21.435 6ef77f2a5e net/gve: fix RX buffer size alignment
00:01:21.455 [Pipeline] writeFile
00:01:21.470 [Pipeline] sh
00:01:21.756 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:21.770 [Pipeline] sh
00:01:22.056 + cat autorun-spdk.conf
00:01:22.056 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:22.056 SPDK_RUN_ASAN=1
00:01:22.056 SPDK_RUN_UBSAN=1
00:01:22.056 SPDK_TEST_RAID=1
00:01:22.056 SPDK_TEST_NATIVE_DPDK=v22.11.4
00:01:22.056 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:01:22.056 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:22.064 RUN_NIGHTLY=1
00:01:22.066 [Pipeline] }
00:01:22.080 [Pipeline] // stage
00:01:22.095 [Pipeline] stage
00:01:22.097 [Pipeline] { (Run VM)
00:01:22.110 [Pipeline] sh
00:01:22.395 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:22.395 + echo 'Start stage prepare_nvme.sh'
00:01:22.395 Start stage prepare_nvme.sh
00:01:22.395 + [[ -n 1 ]]
00:01:22.395 + disk_prefix=ex1
00:01:22.395 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]]
00:01:22.395 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]]
00:01:22.395 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf
00:01:22.395 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:22.395 ++ SPDK_RUN_ASAN=1
00:01:22.395 ++ SPDK_RUN_UBSAN=1
00:01:22.395 ++ SPDK_TEST_RAID=1
00:01:22.395 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:01:22.395 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:01:22.395 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:22.395 ++ RUN_NIGHTLY=1
00:01:22.395 + cd /var/jenkins/workspace/raid-vg-autotest
00:01:22.395 + nvme_files=()
00:01:22.395 + declare -A nvme_files
00:01:22.395 + backend_dir=/var/lib/libvirt/images/backends
00:01:22.395 + nvme_files['nvme.img']=5G
00:01:22.395 + nvme_files['nvme-cmb.img']=5G
00:01:22.395 + nvme_files['nvme-multi0.img']=4G
00:01:22.395 + nvme_files['nvme-multi1.img']=4G
00:01:22.395 + nvme_files['nvme-multi2.img']=4G
00:01:22.395 + nvme_files['nvme-openstack.img']=8G
00:01:22.395 + nvme_files['nvme-zns.img']=5G
00:01:22.395 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:22.395 + (( SPDK_TEST_FTL == 1 ))
00:01:22.395 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:22.395 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:22.395 + for nvme in "${!nvme_files[@]}"
00:01:22.395 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G
00:01:22.395 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:22.395 + for nvme in "${!nvme_files[@]}"
00:01:22.395 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G
00:01:22.396 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:22.396 + for nvme in "${!nvme_files[@]}"
00:01:22.396 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G
00:01:22.396 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:22.396 + for nvme in "${!nvme_files[@]}"
00:01:22.396 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G
00:01:22.396 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:22.396 + for nvme in "${!nvme_files[@]}"
00:01:22.396 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G
00:01:22.396 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:22.396 + for nvme in "${!nvme_files[@]}"
00:01:22.396 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G
00:01:22.396 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:22.396 + for nvme in "${!nvme_files[@]}"
00:01:22.396 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G
00:01:22.656 Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:22.656 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu
00:01:22.656 + echo 'End stage prepare_nvme.sh'
00:01:22.656 End stage prepare_nvme.sh
00:01:22.669 [Pipeline] sh
00:01:22.956 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:22.956 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -H -a -v -f fedora39
00:01:22.956
00:01:22.956 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant
00:01:22.956 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk
00:01:22.956 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest
00:01:22.956 HELP=0
00:01:22.956 DRY_RUN=0
00:01:22.956 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,
00:01:22.956 NVME_DISKS_TYPE=nvme,nvme,
00:01:22.956 NVME_AUTO_CREATE=0
00:01:22.956 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,
00:01:22.956 NVME_CMB=,,
00:01:22.956 NVME_PMR=,,
00:01:22.956 NVME_ZNS=,,
00:01:22.956 NVME_MS=,,
00:01:22.956 NVME_FDP=,,
00:01:22.956 SPDK_VAGRANT_DISTRO=fedora39
00:01:22.956 SPDK_VAGRANT_VMCPU=10
00:01:22.956 SPDK_VAGRANT_VMRAM=12288
00:01:22.956 SPDK_VAGRANT_PROVIDER=libvirt
00:01:22.956 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:01:22.956 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:22.956 SPDK_OPENSTACK_NETWORK=0
00:01:22.956 VAGRANT_PACKAGE_BOX=0
00:01:22.956 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:01:22.956 FORCE_DISTRO=true
00:01:22.956 VAGRANT_BOX_VERSION=
00:01:22.956 EXTRA_VAGRANTFILES=
00:01:22.956 NIC_MODEL=virtio
00:01:22.956
00:01:22.956 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt'
00:01:22.956 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest
00:01:24.866 Bringing machine 'default' up with 'libvirt' provider...
00:01:25.436 ==> default: Creating image (snapshot of base box volume).
00:01:25.436 ==> default: Creating domain with the following settings...
00:01:25.436 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732161565_dd74cdbadef6a0b6a31c
00:01:25.436 ==> default: -- Domain type: kvm
00:01:25.436 ==> default: -- Cpus: 10
00:01:25.436 ==> default: -- Feature: acpi
00:01:25.436 ==> default: -- Feature: apic
00:01:25.436 ==> default: -- Feature: pae
00:01:25.437 ==> default: -- Memory: 12288M
00:01:25.437 ==> default: -- Memory Backing: hugepages:
00:01:25.437 ==> default: -- Management MAC:
00:01:25.437 ==> default: -- Loader:
00:01:25.437 ==> default: -- Nvram:
00:01:25.437 ==> default: -- Base box: spdk/fedora39
00:01:25.437 ==> default: -- Storage pool: default
00:01:25.697 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732161565_dd74cdbadef6a0b6a31c.img (20G)
00:01:25.697 ==> default: -- Volume Cache: default
00:01:25.697 ==> default: -- Kernel:
00:01:25.697 ==> default: -- Initrd:
00:01:25.697 ==> default: -- Graphics Type: vnc
00:01:25.697 ==> default: -- Graphics Port: -1
00:01:25.697 ==> default: -- Graphics IP: 127.0.0.1
00:01:25.697 ==> default: -- Graphics Password: Not defined
00:01:25.697 ==> default: -- Video Type: cirrus
00:01:25.697 ==> default: -- Video VRAM: 9216
00:01:25.697 ==> default: -- Sound Type:
00:01:25.697 ==> default: -- Keymap: en-us
00:01:25.697 ==> default: -- TPM Path:
00:01:25.697 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:25.697 ==> default: -- Command line args:
00:01:25.697 ==> default: -> value=-device,
00:01:25.697 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:01:25.697 ==> default: -> value=-drive,
00:01:25.697 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-0-drive0,
00:01:25.697 ==> default: -> value=-device,
00:01:25.697 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:25.697 ==> default: -> value=-device,
00:01:25.697 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:01:25.697 ==> default: -> value=-drive,
00:01:25.697 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:01:25.697 ==> default: -> value=-device,
00:01:25.697 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:25.697 ==> default: -> value=-drive,
00:01:25.697 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:01:25.697 ==> default: -> value=-device,
00:01:25.697 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:25.697 ==> default: -> value=-drive,
00:01:25.697 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:01:25.697 ==> default: -> value=-device,
00:01:25.697 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:25.697 ==> default: Creating shared folders metadata...
00:01:25.697 ==> default: Starting domain.
00:01:27.081 ==> default: Waiting for domain to get an IP address...
00:01:45.186 ==> default: Waiting for SSH to become available...
00:01:45.186 ==> default: Configuring and enabling network interfaces...
00:01:50.590 default: SSH address: 192.168.121.171:22
00:01:50.590 default: SSH username: vagrant
00:01:50.590 default: SSH auth method: private key
00:01:53.134 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:02:01.264 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk
00:02:07.838 ==> default: Mounting SSHFS shared folder...
00:02:09.750 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:02:09.750 ==> default: Checking Mount..
00:02:11.130 ==> default: Folder Successfully Mounted!
00:02:11.130 ==> default: Running provisioner: file...
00:02:12.513 default: ~/.gitconfig => .gitconfig
00:02:12.774
00:02:12.774 SUCCESS!
00:02:12.774
00:02:12.774 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:02:12.774 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:02:12.774 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:02:12.774
00:02:12.784 [Pipeline] }
00:02:12.799 [Pipeline] // stage
00:02:12.808 [Pipeline] dir
00:02:12.808 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt
00:02:12.810 [Pipeline] {
00:02:12.821 [Pipeline] catchError
00:02:12.822 [Pipeline] {
00:02:12.833 [Pipeline] sh
00:02:13.112 + vagrant ssh-config --host vagrant
00:02:13.112 + sed -ne /^Host/,$p
00:02:13.112 + tee ssh_conf
00:02:15.654 Host vagrant
00:02:15.654 HostName 192.168.121.171
00:02:15.654 User vagrant
00:02:15.654 Port 22
00:02:15.654 UserKnownHostsFile /dev/null
00:02:15.654 StrictHostKeyChecking no
00:02:15.654 PasswordAuthentication no
00:02:15.654 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:02:15.654 IdentitiesOnly yes
00:02:15.654 LogLevel FATAL
00:02:15.654 ForwardAgent yes
00:02:15.654 ForwardX11 yes
00:02:15.654
00:02:15.670 [Pipeline] withEnv
00:02:15.673 [Pipeline] {
00:02:15.686 [Pipeline] sh
00:02:15.970 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:02:15.970 source /etc/os-release
00:02:15.970 [[ -e /image.version ]] && img=$(< /image.version)
00:02:15.970 # Minimal, systemd-like check.
00:02:15.970 if [[ -e /.dockerenv ]]; then
00:02:15.970 # Clear garbage from the node's name:
00:02:15.970 # agt-er_autotest_547-896 -> autotest_547-896
00:02:15.970 # $HOSTNAME is the actual container id
00:02:15.970 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:02:15.970 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:02:15.970 # We can assume this is a mount from a host where container is running,
00:02:15.970 # so fetch its hostname to easily identify the target swarm worker.
00:02:15.970 container="$(< /etc/hostname) ($agent)"
00:02:15.970 else
00:02:15.970 # Fallback
00:02:15.970 container=$agent
00:02:15.970 fi
00:02:15.970 fi
00:02:15.970 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:02:15.970
00:02:16.243 [Pipeline] }
00:02:16.260 [Pipeline] // withEnv
00:02:16.268 [Pipeline] setCustomBuildProperty
00:02:16.283 [Pipeline] stage
00:02:16.285 [Pipeline] { (Tests)
00:02:16.304 [Pipeline] sh
00:02:16.591 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:02:16.866 [Pipeline] sh
00:02:17.168 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:02:17.447 [Pipeline] timeout
00:02:17.447 Timeout set to expire in 1 hr 30 min
00:02:17.450 [Pipeline] {
00:02:17.467 [Pipeline] sh
00:02:17.751 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:02:18.319 HEAD is now at 557f022f6 bdev: Change 1st parameter of bdev_bytes_to_blocks from bdev to desc
00:02:18.332 [Pipeline] sh
00:02:18.614 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:02:18.890 [Pipeline] sh
00:02:19.175 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:02:19.446 [Pipeline] sh
00:02:19.724 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo
00:02:19.984 ++ readlink -f spdk_repo
00:02:19.984 + DIR_ROOT=/home/vagrant/spdk_repo
00:02:19.984 + [[ -n /home/vagrant/spdk_repo ]]
00:02:19.984 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:02:19.984 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:02:19.984 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:02:19.984 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:02:19.984 + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:19.984 + [[ raid-vg-autotest == pkgdep-* ]]
00:02:19.984 + cd /home/vagrant/spdk_repo
00:02:19.984 + source /etc/os-release
00:02:19.984 ++ NAME='Fedora Linux'
00:02:19.984 ++ VERSION='39 (Cloud Edition)'
00:02:19.984 ++ ID=fedora
00:02:19.984 ++ VERSION_ID=39
00:02:19.984 ++ VERSION_CODENAME=
00:02:19.984 ++ PLATFORM_ID=platform:f39
00:02:19.984 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:19.984 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:19.984 ++ LOGO=fedora-logo-icon
00:02:19.984 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:19.984 ++ HOME_URL=https://fedoraproject.org/
00:02:19.984 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:19.984 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:19.984 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:19.984 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:19.984 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:19.984 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:19.984 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:19.984 ++ SUPPORT_END=2024-11-12
00:02:19.984 ++ VARIANT='Cloud Edition'
00:02:19.984 ++ VARIANT_ID=cloud
00:02:19.984 + uname -a
00:02:19.984 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:19.984 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:20.553 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:02:20.553 Hugepages
00:02:20.553 node hugesize free / total
00:02:20.553 node0 1048576kB 0 / 0
00:02:20.553 node0 2048kB 0 / 0
00:02:20.553
00:02:20.553 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:20.553 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:02:20.553 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:02:20.553 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3
00:02:20.553 + rm -f /tmp/spdk-ld-path
00:02:20.553 + source autorun-spdk.conf
00:02:20.553 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:20.553 ++ SPDK_RUN_ASAN=1
00:02:20.553 ++ SPDK_RUN_UBSAN=1
00:02:20.553 ++ SPDK_TEST_RAID=1
00:02:20.553 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:02:20.553 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:02:20.553 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:20.553 ++ RUN_NIGHTLY=1
00:02:20.553 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:20.553 + [[ -n '' ]]
00:02:20.553 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:20.553 + for M in /var/spdk/build-*-manifest.txt
00:02:20.553 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:20.553 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:02:20.553 + for M in /var/spdk/build-*-manifest.txt
00:02:20.553 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:20.553 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:20.553 + for M in /var/spdk/build-*-manifest.txt
00:02:20.553 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:20.553 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:20.553 ++ uname
00:02:20.553 + [[ Linux == \L\i\n\u\x ]]
00:02:20.553 + sudo dmesg -T
00:02:20.813 + sudo dmesg --clear
00:02:20.813 + dmesg_pid=6162
00:02:20.813 + sudo dmesg -Tw
00:02:20.813 + [[ Fedora Linux == FreeBSD ]]
00:02:20.813 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:20.813 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:20.813 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:20.813 + [[ -x /usr/src/fio-static/fio ]]
00:02:20.813 + export FIO_BIN=/usr/src/fio-static/fio
00:02:20.813 + FIO_BIN=/usr/src/fio-static/fio
00:02:20.813 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:20.813 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:20.813 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:20.813 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:20.813 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:20.813 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:20.813 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:20.813 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:20.813 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:20.813 04:00:20 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:02:20.813 04:00:20 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:20.813 04:00:20 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:20.813 04:00:20 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1
00:02:20.813 04:00:20 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1
00:02:20.813 04:00:20 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1
00:02:20.813 04:00:20 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:02:20.813 04:00:20 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:02:20.813 04:00:20 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:20.813 04:00:20 -- spdk_repo/autorun-spdk.conf@8 -- $ RUN_NIGHTLY=1
00:02:20.813 04:00:20 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:02:20.813 04:00:20 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:20.813 04:00:20 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:02:20.813 04:00:20 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:02:20.813 04:00:20 -- scripts/common.sh@15 -- $ shopt -s extglob
00:02:20.813 04:00:20 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:20.813 04:00:20 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:20.813 04:00:20 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:20.813 04:00:20 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:20.813 04:00:20 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:20.813 04:00:20 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:20.813 04:00:20 -- paths/export.sh@5 -- $ export PATH
00:02:20.813 04:00:20 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:20.813 04:00:20 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:02:20.813 04:00:20 -- common/autobuild_common.sh@493 -- $ date +%s
00:02:20.813 04:00:20 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732161620.XXXXXX
00:02:20.813 04:00:20 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732161620.82xDjq
00:02:20.813 04:00:20 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:02:20.813 04:00:20 -- common/autobuild_common.sh@499 -- $ '[' -n v22.11.4 ']'
00:02:21.073 04:00:20 -- common/autobuild_common.sh@500 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:02:21.073 04:00:20 -- common/autobuild_common.sh@500 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
00:02:21.073 04:00:20 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:02:21.073 04:00:20 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:02:21.073 04:00:20 -- common/autobuild_common.sh@509 -- $ get_config_params
00:02:21.073 04:00:20 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:02:21.073 04:00:20 -- common/autotest_common.sh@10 -- $ set +x
00:02:21.073 04:00:20 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build'
00:02:21.073 04:00:20 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:02:21.073 04:00:20 -- pm/common@17 -- $ local monitor
00:02:21.073 04:00:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:21.073 04:00:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:21.073 04:00:20 -- pm/common@25 -- $ sleep 1
00:02:21.073 04:00:20 -- pm/common@21 -- $ date +%s
00:02:21.073 04:00:20 -- pm/common@21 -- $ date +%s
00:02:21.073 04:00:20 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732161620
00:02:21.073 04:00:20 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732161620
00:02:21.073 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732161620_collect-vmstat.pm.log
00:02:21.073 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732161620_collect-cpu-load.pm.log
00:02:22.013 04:00:21 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:02:22.013 04:00:21 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:22.013 04:00:21 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:22.013 04:00:21 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:02:22.013 04:00:21 -- spdk/autobuild.sh@16 -- $ date -u
00:02:22.013 Thu Nov 21 04:00:21 AM UTC 2024
00:02:22.013 04:00:21 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:22.013 v25.01-pre-219-g557f022f6
00:02:22.013 04:00:21 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:02:22.013 04:00:21 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:02:22.013 04:00:21 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:22.013 04:00:21 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:22.013 04:00:21 -- common/autotest_common.sh@10 -- $ set +x
00:02:22.013 ************************************
00:02:22.013 START TEST asan
00:02:22.013 ************************************
00:02:22.013 using asan
00:02:22.013 04:00:21 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:02:22.013
00:02:22.013 real 0m0.001s
00:02:22.013 user 0m0.000s
00:02:22.013 sys 0m0.000s
00:02:22.013 04:00:21 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:22.013 04:00:21 asan -- common/autotest_common.sh@10 -- $ set +x
00:02:22.013 ************************************
00:02:22.013 END TEST asan
00:02:22.013 ************************************
00:02:22.013 04:00:21 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:22.013 04:00:21 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:22.013 04:00:21 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:22.013 04:00:21 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:22.013 04:00:21 -- common/autotest_common.sh@10 -- $ set +x
00:02:22.013 ************************************
00:02:22.013 START TEST ubsan
00:02:22.013 ************************************
00:02:22.013 using ubsan
00:02:22.013 04:00:21 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:02:22.013
00:02:22.013 real 0m0.000s
00:02:22.013 user 0m0.000s
00:02:22.013 sys 0m0.000s
00:02:22.013 04:00:21 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:22.013 04:00:21 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:22.013 ************************************
00:02:22.013 END TEST ubsan
00:02:22.013 ************************************
00:02:22.013 04:00:21 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']'
00:02:22.013 04:00:21 -- spdk/autobuild.sh@28 -- $ build_native_dpdk
00:02:22.274 04:00:21 -- common/autobuild_common.sh@449 -- $ run_test build_native_dpdk _build_native_dpdk
00:02:22.274 04:00:21 -- common/autotest_common.sh@1105 -- $ '[' 2 -le 1 ']'
00:02:22.274 04:00:21 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:22.274 04:00:21 -- common/autotest_common.sh@10 -- $ set +x
00:02:22.274 ************************************
00:02:22.274 START TEST build_native_dpdk
00:02:22.274 ************************************
00:02:22.274 04:00:21 build_native_dpdk -- common/autotest_common.sh@1129 -- $ _build_native_dpdk
00:02:22.274 04:00:21 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir
00:02:22.274 04:00:21 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir
00:02:22.274 04:00:21 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version
00:02:22.274 04:00:21 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler
00:02:22.274 04:00:22 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods
00:02:22.274 04:00:22 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk
00:02:22.274 04:00:22 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc
00:02:22.274 04:00:22 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc
00:02:22.274 04:00:22 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc
00:02:22.274 04:00:22 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]]
00:02:22.274 04:00:22 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]]
00:02:22.274 04:00:22 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion
00:02:22.274 04:00:22 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13
00:02:22.274 04:00:22 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13
00:02:22.274 04:00:22 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build
00:02:22.274 04:00:22 build_native_dpdk -- common/autobuild_common.sh@71 --
$ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:22.274 04:00:22 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:22.274 04:00:22 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]] 00:02:22.274 04:00:22 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:22.274 04:00:22 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:22.274 caf0f5d395 version: 22.11.4 00:02:22.274 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:02:22.274 dc9c799c7d vhost: fix missing spinlock unlock 00:02:22.274 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:02:22.274 6ef77f2a5e net/gve: fix RX buffer size alignment 00:02:22.274 04:00:22 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:22.274 04:00:22 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:22.274 04:00:22 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:02:22.274 04:00:22 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:22.274 04:00:22 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:22.274 04:00:22 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:22.274 04:00:22 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:22.274 04:00:22 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:22.274 04:00:22 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:22.274 04:00:22 build_native_dpdk -- common/autobuild_common.sh@102 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base" "power/acpi" "power/amd_pstate" "power/cppc" "power/intel_pstate" "power/intel_uncore" 
"power/kvm_vm") 00:02:22.274 04:00:22 build_native_dpdk -- common/autobuild_common.sh@103 -- $ local mlx5_libs_added=n 00:02:22.274 04:00:22 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]] 00:02:22.274 04:00:22 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]] 00:02:22.274 04:00:22 build_native_dpdk -- common/autobuild_common.sh@146 -- $ [[ 0 -eq 1 ]] 00:02:22.274 04:00:22 build_native_dpdk -- common/autobuild_common.sh@174 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:22.274 04:00:22 build_native_dpdk -- common/autobuild_common.sh@175 -- $ uname -s 00:02:22.274 04:00:22 build_native_dpdk -- common/autobuild_common.sh@175 -- $ '[' Linux = Linux ']' 00:02:22.274 04:00:22 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 22.11.4 21.11.0 00:02:22.274 04:00:22 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:02:22.274 04:00:22 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:22.274 04:00:22 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:22.274 04:00:22 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:22.274 04:00:22 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:22.274 04:00:22 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:22.274 04:00:22 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:22.274 04:00:22 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:22.274 04:00:22 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:22.274 04:00:22 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:22.274 04:00:22 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:22.274 04:00:22 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:22.274 04:00:22 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:22.274 04:00:22 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:22.274 04:00:22 
build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:22.274 04:00:22 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:22.274 04:00:22 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:22.274 04:00:22 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:22.274 04:00:22 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:22.274 04:00:22 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:22.274 04:00:22 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:02:22.274 04:00:22 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:02:22.274 04:00:22 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:22.274 04:00:22 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:02:22.274 04:00:22 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:02:22.274 04:00:22 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:22.274 04:00:22 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:22.274 04:00:22 build_native_dpdk -- common/autobuild_common.sh@180 -- $ patch -p1 00:02:22.274 patching file config/rte_config.h 00:02:22.274 Hunk #1 succeeded at 60 (offset 1 line). 
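The `cmp_versions` trace above walks the version gate component by component: split both versions on `IFS=.-:`, compare numerically from the left, and stop at the first differing component. A minimal standalone re-sketch of that "less than" path, assuming bash; `lt` here is an illustrative reimplementation for readability, not the actual scripts/common.sh helper:

```shell
# Sketch of the component-wise version compare traced in the log.
lt() {
  # true (exit 0) when version $1 sorts strictly before $2
  local IFS=.-:        # split on dots, dashes, and colons, as in the trace
  local -a ver1 ver2
  local v
  read -ra ver1 <<< "$1"
  read -ra ver2 <<< "$2"
  # walk up to the longer version's component count, defaulting missing parts to 0
  for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
    if ((${ver1[v]:-0} > ${ver2[v]:-0})); then return 1; fi
    if ((${ver1[v]:-0} < ${ver2[v]:-0})); then return 0; fi
  done
  return 1             # equal versions are not strictly less
}

lt 22.11.4 21.11.0 || echo "22.11.4 is not older than 21.11.0"
lt 22.11.4 24.07.0 && echo "22.11.4 is older than 24.07.0"
```

As in the trace, `lt 22.11.4 21.11.0` decides at the very first component (22 > 21) and returns 1, which is why the pre-21.11 branch is skipped before the `patch -p1` step.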
00:02:22.274 04:00:22 build_native_dpdk -- common/autobuild_common.sh@183 -- $ lt 22.11.4 24.07.0
00:02:22.274 04:00:22 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 24.07.0
00:02:22.274 04:00:22 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:02:22.274 04:00:22 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:02:22.274 04:00:22 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:02:22.274 04:00:22 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:02:22.274 04:00:22 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:02:22.274 04:00:22 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:02:22.274 04:00:22 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<'
00:02:22.274 04:00:22 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3
00:02:22.274 04:00:22 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:02:22.274 04:00:22 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:02:22.274 04:00:22 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:02:22.274 04:00:22 build_native_dpdk -- scripts/common.sh@345 -- $ : 1
00:02:22.274 04:00:22 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:02:22.274 04:00:22 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:02:22.274 04:00:22 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22
00:02:22.274 04:00:22 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22
00:02:22.274 04:00:22 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]]
00:02:22.274 04:00:22 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22
00:02:22.274 04:00:22 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22
00:02:22.274 04:00:22 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24
00:02:22.274 04:00:22 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24
00:02:22.274 04:00:22 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:02:22.274 04:00:22 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24
00:02:22.274 04:00:22 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24
00:02:22.274 04:00:22 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:02:22.274 04:00:22 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:02:22.274 04:00:22 build_native_dpdk -- scripts/common.sh@368 -- $ return 0
00:02:22.275 04:00:22 build_native_dpdk -- common/autobuild_common.sh@184 -- $ patch -p1
00:02:22.275 patching file lib/pcapng/rte_pcapng.c
00:02:22.275 Hunk #1 succeeded at 110 (offset -18 lines).
00:02:22.275 04:00:22 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ge 22.11.4 24.07.0
00:02:22.275 04:00:22 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 22.11.4 '>=' 24.07.0
00:02:22.275 04:00:22 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:02:22.275 04:00:22 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:02:22.275 04:00:22 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:02:22.275 04:00:22 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:02:22.275 04:00:22 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:02:22.275 04:00:22 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:02:22.275 04:00:22 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>='
00:02:22.275 04:00:22 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3
00:02:22.275 04:00:22 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:02:22.275 04:00:22 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:02:22.275 04:00:22 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:02:22.275 04:00:22 build_native_dpdk -- scripts/common.sh@348 -- $ : 1
00:02:22.275 04:00:22 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:02:22.275 04:00:22 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:02:22.275 04:00:22 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22
00:02:22.275 04:00:22 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22
00:02:22.275 04:00:22 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]]
00:02:22.275 04:00:22 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22
00:02:22.275 04:00:22 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22
00:02:22.275 04:00:22 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24
00:02:22.275 04:00:22 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24
00:02:22.275 04:00:22 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:02:22.275 04:00:22 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24
00:02:22.275 04:00:22 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24
00:02:22.275 04:00:22 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:02:22.275 04:00:22 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:02:22.275 04:00:22 build_native_dpdk -- scripts/common.sh@368 -- $ return 1
00:02:22.275 04:00:22 build_native_dpdk -- common/autobuild_common.sh@190 -- $ dpdk_kmods=false
00:02:22.275 04:00:22 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s
00:02:22.275 04:00:22 build_native_dpdk -- common/autobuild_common.sh@191 -- $ '[' Linux = FreeBSD ']'
00:02:22.275 04:00:22 build_native_dpdk -- common/autobuild_common.sh@195 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base power/acpi power/amd_pstate power/cppc power/intel_pstate power/intel_uncore power/kvm_vm
00:02:22.275 04:00:22 build_native_dpdk -- common/autobuild_common.sh@195 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm,
00:02:28.844 The Meson build system
00:02:28.844 Version: 1.5.0
00:02:28.844 Source dir: /home/vagrant/spdk_repo/dpdk
00:02:28.844 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp
00:02:28.844 Build type: native build
00:02:28.844 Program cat found: YES (/usr/bin/cat)
00:02:28.844 Project name: DPDK
00:02:28.844 Project version: 22.11.4
00:02:28.844 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:28.844 C linker for the host machine: gcc ld.bfd 2.40-14
00:02:28.844 Host machine cpu family: x86_64
00:02:28.844 Host machine cpu: x86_64
00:02:28.844 Message: ## Building in Developer Mode ##
00:02:28.844 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:28.844 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh)
00:02:28.844 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh)
00:02:28.844 Program objdump found: YES (/usr/bin/objdump)
00:02:28.844 Program python3 found: YES (/usr/bin/python3)
00:02:28.844 Program cat found: YES (/usr/bin/cat)
00:02:28.844 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead.
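The warning above flags `-Dmachine=native` as deprecated in this DPDK/meson combination and recommends `cpu_instruction_set`. A hedged sketch of the equivalent configure call with the renamed option (paths shortened and the driver list truncated for illustration; this is not the project's actual invocation):

```shell
# Sketch only: same configuration as the logged command, but using the
# cpu_instruction_set option the deprecation warning recommends in place
# of -Dmachine. The prefix path here is a placeholder.
meson setup build-tmp \
  --prefix="$HOME/spdk_repo/dpdk/build" --libdir lib \
  -Denable_docs=false -Denable_kmods=false -Dtests=false \
  '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
  -Dcpu_instruction_set=native \
  -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e
```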
00:02:28.844 Checking for size of "void *" : 8
00:02:28.844 Checking for size of "void *" : 8 (cached)
00:02:28.844 Library m found: YES
00:02:28.844 Library numa found: YES
00:02:28.844 Has header "numaif.h" : YES
00:02:28.844 Library fdt found: NO
00:02:28.844 Library execinfo found: NO
00:02:28.844 Has header "execinfo.h" : YES
00:02:28.844 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:28.844 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:28.844 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:28.844 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:28.844 Run-time dependency openssl found: YES 3.1.1
00:02:28.844 Run-time dependency libpcap found: YES 1.10.4
00:02:28.844 Has header "pcap.h" with dependency libpcap: YES
00:02:28.844 Compiler for C supports arguments -Wcast-qual: YES
00:02:28.844 Compiler for C supports arguments -Wdeprecated: YES
00:02:28.844 Compiler for C supports arguments -Wformat: YES
00:02:28.844 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:28.844 Compiler for C supports arguments -Wformat-security: NO
00:02:28.844 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:28.844 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:28.844 Compiler for C supports arguments -Wnested-externs: YES
00:02:28.844 Compiler for C supports arguments -Wold-style-definition: YES
00:02:28.844 Compiler for C supports arguments -Wpointer-arith: YES
00:02:28.844 Compiler for C supports arguments -Wsign-compare: YES
00:02:28.844 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:28.844 Compiler for C supports arguments -Wundef: YES
00:02:28.844 Compiler for C supports arguments -Wwrite-strings: YES
00:02:28.844 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:28.844 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:28.844 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:28.844 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:28.844 Compiler for C supports arguments -mavx512f: YES
00:02:28.844 Checking if "AVX512 checking" compiles: YES
00:02:28.844 Fetching value of define "__SSE4_2__" : 1
00:02:28.844 Fetching value of define "__AES__" : 1
00:02:28.844 Fetching value of define "__AVX__" : 1
00:02:28.844 Fetching value of define "__AVX2__" : 1
00:02:28.844 Fetching value of define "__AVX512BW__" : 1
00:02:28.844 Fetching value of define "__AVX512CD__" : 1
00:02:28.844 Fetching value of define "__AVX512DQ__" : 1
00:02:28.844 Fetching value of define "__AVX512F__" : 1
00:02:28.844 Fetching value of define "__AVX512VL__" : 1
00:02:28.844 Fetching value of define "__PCLMUL__" : 1
00:02:28.844 Fetching value of define "__RDRND__" : 1
00:02:28.844 Fetching value of define "__RDSEED__" : 1
00:02:28.844 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:02:28.844 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:28.844 Message: lib/kvargs: Defining dependency "kvargs"
00:02:28.844 Message: lib/telemetry: Defining dependency "telemetry"
00:02:28.844 Checking for function "getentropy" : YES
00:02:28.844 Message: lib/eal: Defining dependency "eal"
00:02:28.844 Message: lib/ring: Defining dependency "ring"
00:02:28.844 Message: lib/rcu: Defining dependency "rcu"
00:02:28.844 Message: lib/mempool: Defining dependency "mempool"
00:02:28.844 Message: lib/mbuf: Defining dependency "mbuf"
00:02:28.844 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:28.844 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:28.844 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:28.844 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:28.844 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:28.844 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:02:28.844 Compiler for C supports arguments -mpclmul: YES
00:02:28.844 Compiler for C supports arguments -maes: YES
00:02:28.844 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:28.844 Compiler for C supports arguments -mavx512bw: YES
00:02:28.844 Compiler for C supports arguments -mavx512dq: YES
00:02:28.844 Compiler for C supports arguments -mavx512vl: YES
00:02:28.844 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:28.844 Compiler for C supports arguments -mavx2: YES
00:02:28.844 Compiler for C supports arguments -mavx: YES
00:02:28.844 Message: lib/net: Defining dependency "net"
00:02:28.844 Message: lib/meter: Defining dependency "meter"
00:02:28.844 Message: lib/ethdev: Defining dependency "ethdev"
00:02:28.844 Message: lib/pci: Defining dependency "pci"
00:02:28.844 Message: lib/cmdline: Defining dependency "cmdline"
00:02:28.844 Message: lib/metrics: Defining dependency "metrics"
00:02:28.844 Message: lib/hash: Defining dependency "hash"
00:02:28.844 Message: lib/timer: Defining dependency "timer"
00:02:28.844 Fetching value of define "__AVX2__" : 1 (cached)
00:02:28.844 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:28.844 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:28.844 Fetching value of define "__AVX512CD__" : 1 (cached)
00:02:28.844 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:28.844 Message: lib/acl: Defining dependency "acl"
00:02:28.844 Message: lib/bbdev: Defining dependency "bbdev"
00:02:28.844 Message: lib/bitratestats: Defining dependency "bitratestats"
00:02:28.844 Run-time dependency libelf found: YES 0.191
00:02:28.844 Message: lib/bpf: Defining dependency "bpf"
00:02:28.844 Message: lib/cfgfile: Defining dependency "cfgfile"
00:02:28.844 Message: lib/compressdev: Defining dependency "compressdev"
00:02:28.844 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:28.844 Message: lib/distributor: Defining dependency "distributor"
00:02:28.844 Message: lib/efd: Defining dependency "efd"
00:02:28.844 Message: lib/eventdev: Defining dependency "eventdev"
00:02:28.844 Message: lib/gpudev: Defining dependency "gpudev"
00:02:28.844 Message: lib/gro: Defining dependency "gro"
00:02:28.844 Message: lib/gso: Defining dependency "gso"
00:02:28.844 Message: lib/ip_frag: Defining dependency "ip_frag"
00:02:28.844 Message: lib/jobstats: Defining dependency "jobstats"
00:02:28.844 Message: lib/latencystats: Defining dependency "latencystats"
00:02:28.844 Message: lib/lpm: Defining dependency "lpm"
00:02:28.844 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:28.844 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:28.844 Fetching value of define "__AVX512IFMA__" : (undefined)
00:02:28.844 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES
00:02:28.844 Message: lib/member: Defining dependency "member"
00:02:28.844 Message: lib/pcapng: Defining dependency "pcapng"
00:02:28.844 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:28.844 Message: lib/power: Defining dependency "power"
00:02:28.844 Message: lib/rawdev: Defining dependency "rawdev"
00:02:28.844 Message: lib/regexdev: Defining dependency "regexdev"
00:02:28.844 Message: lib/dmadev: Defining dependency "dmadev"
00:02:28.844 Message: lib/rib: Defining dependency "rib"
00:02:28.844 Message: lib/reorder: Defining dependency "reorder"
00:02:28.844 Message: lib/sched: Defining dependency "sched"
00:02:28.844 Message: lib/security: Defining dependency "security"
00:02:28.844 Message: lib/stack: Defining dependency "stack"
00:02:28.844 Has header "linux/userfaultfd.h" : YES
00:02:28.844 Message: lib/vhost: Defining dependency "vhost"
00:02:28.844 Message: lib/ipsec: Defining dependency "ipsec"
00:02:28.844 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:28.844 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:28.844 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:28.844 Message: lib/fib: Defining dependency "fib"
00:02:28.844 Message: lib/port: Defining dependency "port"
00:02:28.844 Message: lib/pdump: Defining dependency "pdump"
00:02:28.844 Message: lib/table: Defining dependency "table"
00:02:28.844 Message: lib/pipeline: Defining dependency "pipeline"
00:02:28.844 Message: lib/graph: Defining dependency "graph"
00:02:28.845 Message: lib/node: Defining dependency "node"
00:02:28.845 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:28.845 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:28.845 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:28.845 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:28.845 Compiler for C supports arguments -Wno-sign-compare: YES
00:02:28.845 Compiler for C supports arguments -Wno-unused-value: YES
00:02:28.845 Compiler for C supports arguments -Wno-format: YES
00:02:28.845 Compiler for C supports arguments -Wno-format-security: YES
00:02:28.845 Compiler for C supports arguments -Wno-format-nonliteral: YES
00:02:28.845 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:02:29.779 Compiler for C supports arguments -Wno-unused-but-set-variable: YES
00:02:29.779 Compiler for C supports arguments -Wno-unused-parameter: YES
00:02:29.779 Fetching value of define "__AVX2__" : 1 (cached)
00:02:29.779 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:29.779 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:29.779 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:29.779 Compiler for C supports arguments -mavx512bw: YES (cached)
00:02:29.779 Compiler for C supports arguments -march=skylake-avx512: YES
00:02:29.779 Message: drivers/net/i40e: Defining dependency "net_i40e"
00:02:29.779 Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:29.779 Configuring doxy-api.conf using configuration
00:02:29.779 Program sphinx-build found: NO
00:02:29.779 Configuring rte_build_config.h using configuration
00:02:29.779 Message:
00:02:29.779 =================
00:02:29.779 Applications Enabled
00:02:29.779 =================
00:02:29.779
00:02:29.779 apps:
00:02:29.779 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf,
00:02:29.779 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad,
00:02:29.779 test-security-perf,
00:02:29.779
00:02:29.779 Message:
00:02:29.779 =================
00:02:29.779 Libraries Enabled
00:02:29.779 =================
00:02:29.779
00:02:29.779 libs:
00:02:29.779 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net,
00:02:29.779 meter, ethdev, pci, cmdline, metrics, hash, timer, acl,
00:02:29.779 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd,
00:02:29.779 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm,
00:02:29.779 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder,
00:02:29.779 sched, security, stack, vhost, ipsec, fib, port, pdump,
00:02:29.779 table, pipeline, graph, node,
00:02:29.779
00:02:29.779 Message:
00:02:29.779 ===============
00:02:29.779 Drivers Enabled
00:02:29.779 ===============
00:02:29.779
00:02:29.779 common:
00:02:29.779
00:02:29.779 bus:
00:02:29.779 pci, vdev,
00:02:29.779 mempool:
00:02:29.779 ring,
00:02:29.779 dma:
00:02:29.779
00:02:29.779 net:
00:02:29.779 i40e,
00:02:29.779 raw:
00:02:29.779
00:02:29.779 crypto:
00:02:29.779
00:02:29.779 compress:
00:02:29.779
00:02:29.779 regex:
00:02:29.779
00:02:29.779 vdpa:
00:02:29.779
00:02:29.779 event:
00:02:29.779
00:02:29.779 baseband:
00:02:29.779
00:02:29.779 gpu:
00:02:29.779
00:02:29.779
00:02:29.779 Message:
00:02:29.779 =================
00:02:29.779 Content Skipped
00:02:29.779 =================
00:02:29.779
00:02:29.779 apps:
00:02:29.779
00:02:29.779 libs:
00:02:29.779 kni: explicitly disabled via build config (deprecated lib)
00:02:29.780 flow_classify: explicitly disabled via build config (deprecated lib)
00:02:29.780
00:02:29.780 drivers:
00:02:29.780 common/cpt: not in enabled drivers build config
00:02:29.780 common/dpaax: not in enabled drivers build config
00:02:29.780 common/iavf: not in enabled drivers build config
00:02:29.780 common/idpf: not in enabled drivers build config
00:02:29.780 common/mvep: not in enabled drivers build config
00:02:29.780 common/octeontx: not in enabled drivers build config
00:02:29.780 bus/auxiliary: not in enabled drivers build config
00:02:29.780 bus/dpaa: not in enabled drivers build config
00:02:29.780 bus/fslmc: not in enabled drivers build config
00:02:29.780 bus/ifpga: not in enabled drivers build config
00:02:29.780 bus/vmbus: not in enabled drivers build config
00:02:29.780 common/cnxk: not in enabled drivers build config
00:02:29.780 common/mlx5: not in enabled drivers build config
00:02:29.780 common/qat: not in enabled drivers build config
00:02:29.780 common/sfc_efx: not in enabled drivers build config
00:02:29.780 mempool/bucket: not in enabled drivers build config
00:02:29.780 mempool/cnxk: not in enabled drivers build config
00:02:29.780 mempool/dpaa: not in enabled drivers build config
00:02:29.780 mempool/dpaa2: not in enabled drivers build config
00:02:29.780 mempool/octeontx: not in enabled drivers build config
00:02:29.780 mempool/stack: not in enabled drivers build config
00:02:29.780 dma/cnxk: not in enabled drivers build config
00:02:29.780 dma/dpaa: not in enabled drivers build config
00:02:29.780 dma/dpaa2: not in enabled drivers build config
00:02:29.780 dma/hisilicon: not in enabled drivers build config
00:02:29.780 dma/idxd: not in enabled drivers build config
00:02:29.780 dma/ioat: not in enabled drivers build config
00:02:29.780 dma/skeleton: not in enabled drivers build config
00:02:29.780 net/af_packet: not in enabled drivers build config
00:02:29.780 net/af_xdp: not in enabled drivers build config
00:02:29.780 net/ark: not in enabled drivers build config
00:02:29.780 net/atlantic: not in enabled drivers build config
00:02:29.780 net/avp: not in enabled drivers build config
00:02:29.780 net/axgbe: not in enabled drivers build config
00:02:29.780 net/bnx2x: not in enabled drivers build config
00:02:29.780 net/bnxt: not in enabled drivers build config
00:02:29.780 net/bonding: not in enabled drivers build config
00:02:29.780 net/cnxk: not in enabled drivers build config
00:02:29.780 net/cxgbe: not in enabled drivers build config
00:02:29.780 net/dpaa: not in enabled drivers build config
00:02:29.780 net/dpaa2: not in enabled drivers build config
00:02:29.780 net/e1000: not in enabled drivers build config
00:02:29.780 net/ena: not in enabled drivers build config
00:02:29.780 net/enetc: not in enabled drivers build config
00:02:29.780 net/enetfec: not in enabled drivers build config
00:02:29.780 net/enic: not in enabled drivers build config
00:02:29.780 net/failsafe: not in enabled drivers build config
00:02:29.780 net/fm10k: not in enabled drivers build config
00:02:29.780 net/gve: not in enabled drivers build config
00:02:29.780 net/hinic: not in enabled drivers build config
00:02:29.780 net/hns3: not in enabled drivers build config
00:02:29.780 net/iavf: not in enabled drivers build config
00:02:29.780 net/ice: not in enabled drivers build config
00:02:29.780 net/idpf: not in enabled drivers build config
00:02:29.780 net/igc: not in enabled drivers build config
00:02:29.780 net/ionic: not in enabled drivers build config
00:02:29.780 net/ipn3ke: not in enabled drivers build config
00:02:29.780 net/ixgbe: not in enabled drivers build config
00:02:29.780 net/kni: not in enabled drivers build config
00:02:29.780 net/liquidio: not in enabled drivers build config
00:02:29.780 net/mana: not in enabled drivers build config
00:02:29.780 net/memif: not in enabled drivers build config
00:02:29.780 net/mlx4: not in enabled drivers build config
00:02:29.780 net/mlx5: not in enabled drivers build config
00:02:29.780 net/mvneta: not in enabled drivers build config
00:02:29.780 net/mvpp2: not in enabled drivers build config
00:02:29.780 net/netvsc: not in enabled drivers build config
00:02:29.780 net/nfb: not in enabled drivers build config
00:02:29.780 net/nfp: not in enabled drivers build config
00:02:29.780 net/ngbe: not in enabled drivers build config
00:02:29.780 net/null: not in enabled drivers build config
00:02:29.780 net/octeontx: not in enabled drivers build config
00:02:29.780 net/octeon_ep: not in enabled drivers build config
00:02:29.780 net/pcap: not in enabled drivers build config
00:02:29.780 net/pfe: not in enabled drivers build config
00:02:29.780 net/qede: not in enabled drivers build config
00:02:29.780 net/ring: not in enabled drivers build config
00:02:29.780 net/sfc: not in enabled drivers build config
00:02:29.780 net/softnic: not in enabled drivers build config
00:02:29.780 net/tap: not in enabled drivers build config
00:02:29.780 net/thunderx: not in enabled drivers build config
00:02:29.780 net/txgbe: not in enabled drivers build config
00:02:29.780 net/vdev_netvsc: not in enabled drivers build config
00:02:29.780 net/vhost: not in enabled drivers build config
00:02:29.780 net/virtio: not in enabled drivers build config
00:02:29.780 net/vmxnet3: not in enabled drivers build config
00:02:29.780 raw/cnxk_bphy: not in enabled drivers build config
00:02:29.780 raw/cnxk_gpio: not in enabled drivers build config
00:02:29.780 raw/dpaa2_cmdif: not in enabled drivers build config
00:02:29.780 raw/ifpga: not in enabled drivers build config
00:02:29.780 raw/ntb: not in enabled drivers build config
00:02:29.780 raw/skeleton: not in enabled drivers build config
00:02:29.780 crypto/armv8: not in enabled drivers build config
00:02:29.780 crypto/bcmfs: not in enabled drivers build config
00:02:29.780 crypto/caam_jr: not in enabled drivers build config
00:02:29.780 crypto/ccp: not in enabled drivers build config
00:02:29.780 crypto/cnxk: not in enabled drivers build config
00:02:29.780 crypto/dpaa_sec: not in enabled drivers build config
00:02:29.780 crypto/dpaa2_sec: not in enabled drivers build config
00:02:29.780 crypto/ipsec_mb: not in enabled drivers build config
00:02:29.780 crypto/mlx5: not in enabled drivers build config
00:02:29.780 crypto/mvsam: not in enabled drivers build config
00:02:29.780 crypto/nitrox: not in enabled drivers build config
00:02:29.780 crypto/null: not in enabled drivers build config
00:02:29.780 crypto/octeontx: not in enabled drivers build config
00:02:29.780 crypto/openssl: not in enabled drivers build config
00:02:29.780 crypto/scheduler: not in enabled drivers build config
00:02:29.780 crypto/uadk: not in enabled drivers build config
00:02:29.780 crypto/virtio: not in enabled drivers build config
00:02:29.780 compress/isal: not in enabled drivers build config
00:02:29.780 compress/mlx5: not in enabled drivers build config
00:02:29.780 compress/octeontx: not in enabled drivers build config
00:02:29.780 compress/zlib: not in enabled drivers build config
00:02:29.780 regex/mlx5: not in enabled drivers build config
00:02:29.780 regex/cn9k: not in enabled drivers build config
00:02:29.780 vdpa/ifc: not in enabled drivers build config
00:02:29.780 vdpa/mlx5: not in enabled drivers build config
00:02:29.780 vdpa/sfc: not in enabled drivers build config
00:02:29.780 event/cnxk: not in enabled drivers build config
00:02:29.780 event/dlb2: not in enabled drivers build config
00:02:29.780 event/dpaa: not in enabled drivers build config
00:02:29.780 event/dpaa2: not in enabled drivers build config
00:02:29.780 event/dsw: not in enabled drivers build config
00:02:29.780 event/opdl: not in enabled drivers build config
00:02:29.780 event/skeleton: not in enabled drivers build config
00:02:29.780 event/sw: not in enabled drivers build config
00:02:29.780 event/octeontx: not in enabled drivers build config
00:02:29.780 baseband/acc: not in enabled drivers build config
00:02:29.780 baseband/fpga_5gnr_fec: not in enabled drivers build config
00:02:29.780 baseband/fpga_lte_fec: not in enabled drivers build config
00:02:29.780 baseband/la12xx: not in enabled drivers build config
00:02:29.780 baseband/null: not in
enabled drivers build config 00:02:29.780 baseband/turbo_sw: not in enabled drivers build config 00:02:29.780 gpu/cuda: not in enabled drivers build config 00:02:29.780 00:02:29.780 00:02:29.780 Build targets in project: 311 00:02:29.780 00:02:29.780 DPDK 22.11.4 00:02:29.780 00:02:29.780 User defined options 00:02:29.780 libdir : lib 00:02:29.780 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:29.780 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:29.780 c_link_args : 00:02:29.780 enable_docs : false 00:02:29.780 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm, 00:02:29.780 enable_kmods : false 00:02:29.780 machine : native 00:02:29.780 tests : false 00:02:29.780 00:02:29.780 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:29.780 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
00:02:29.780 04:00:29 build_native_dpdk -- common/autobuild_common.sh@199 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:29.780 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:29.780 [1/740] Generating lib/rte_telemetry_def with a custom command 00:02:29.780 [2/740] Generating lib/rte_kvargs_def with a custom command 00:02:29.780 [3/740] Generating lib/rte_kvargs_mingw with a custom command 00:02:29.780 [4/740] Generating lib/rte_telemetry_mingw with a custom command 00:02:29.780 [5/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:29.780 [6/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:29.780 [7/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:29.780 [8/740] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:29.780 [9/740] Linking static target lib/librte_kvargs.a 00:02:30.039 [10/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:30.039 [11/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:30.039 [12/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:30.039 [13/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:30.039 [14/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:30.039 [15/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:30.039 [16/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:30.039 [17/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:30.039 [18/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:30.039 [19/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:30.039 [20/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:02:30.039 [21/740] Generating 
lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.039 [22/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:30.297 [23/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:30.297 [24/740] Linking target lib/librte_kvargs.so.23.0 00:02:30.297 [25/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:30.297 [26/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:30.297 [27/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:30.297 [28/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:30.297 [29/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:30.297 [30/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:30.297 [31/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:30.297 [32/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:30.297 [33/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:30.297 [34/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:30.297 [35/740] Linking static target lib/librte_telemetry.a 00:02:30.297 [36/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:30.556 [37/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:30.556 [38/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:30.556 [39/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:30.556 [40/740] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:02:30.556 [41/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:30.556 [42/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:30.556 
[43/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:30.815 [44/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:30.815 [45/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:30.815 [46/740] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.815 [47/740] Linking target lib/librte_telemetry.so.23.0 00:02:30.815 [48/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:30.815 [49/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:30.815 [50/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:30.815 [51/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:30.815 [52/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:30.815 [53/740] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:30.815 [54/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:30.815 [55/740] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:02:30.815 [56/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:30.815 [57/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:30.815 [58/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:30.815 [59/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:30.815 [60/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:30.815 [61/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:31.074 [62/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:31.074 [63/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:31.074 [64/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:31.074 [65/740] Compiling C object 
lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:31.074 [66/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:02:31.074 [67/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:31.075 [68/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:31.075 [69/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:31.075 [70/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:31.075 [71/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:31.075 [72/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:31.075 [73/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:31.075 [74/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:31.075 [75/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:31.075 [76/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:31.075 [77/740] Generating lib/rte_eal_def with a custom command 00:02:31.075 [78/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:31.075 [79/740] Generating lib/rte_eal_mingw with a custom command 00:02:31.075 [80/740] Generating lib/rte_ring_def with a custom command 00:02:31.075 [81/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:31.334 [82/740] Generating lib/rte_ring_mingw with a custom command 00:02:31.335 [83/740] Generating lib/rte_rcu_def with a custom command 00:02:31.335 [84/740] Generating lib/rte_rcu_mingw with a custom command 00:02:31.335 [85/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:31.335 [86/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:31.335 [87/740] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:31.335 [88/740] Linking static target lib/librte_ring.a 00:02:31.335 [89/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 
00:02:31.335 [90/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:31.335 [91/740] Generating lib/rte_mempool_def with a custom command 00:02:31.335 [92/740] Generating lib/rte_mempool_mingw with a custom command 00:02:31.594 [93/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:31.594 [94/740] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.594 [95/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:31.594 [96/740] Generating lib/rte_mbuf_def with a custom command 00:02:31.594 [97/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:31.594 [98/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:31.594 [99/740] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:31.594 [100/740] Generating lib/rte_mbuf_mingw with a custom command 00:02:31.854 [101/740] Linking static target lib/librte_eal.a 00:02:31.854 [102/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:31.854 [103/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:31.854 [104/740] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:31.854 [105/740] Linking static target lib/librte_rcu.a 00:02:32.114 [106/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:32.114 [107/740] Linking static target lib/librte_mempool.a 00:02:32.114 [108/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:32.114 [109/740] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:32.114 [110/740] Generating lib/rte_net_def with a custom command 00:02:32.114 [111/740] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:32.114 [112/740] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:32.114 [113/740] Generating lib/rte_net_mingw with a custom command 00:02:32.114 [114/740] 
Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:32.114 [115/740] Generating lib/rte_meter_def with a custom command 00:02:32.114 [116/740] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.114 [117/740] Generating lib/rte_meter_mingw with a custom command 00:02:32.373 [118/740] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:32.373 [119/740] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:32.373 [120/740] Linking static target lib/librte_meter.a 00:02:32.373 [121/740] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:32.633 [122/740] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:32.633 [123/740] Linking static target lib/librte_net.a 00:02:32.633 [124/740] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.633 [125/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:32.633 [126/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:32.633 [127/740] Linking static target lib/librte_mbuf.a 00:02:32.633 [128/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:32.633 [129/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:32.893 [130/740] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.893 [131/740] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.893 [132/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:32.893 [133/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:33.152 [134/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:33.152 [135/740] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.152 [136/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:33.152 
[137/740] Generating lib/rte_ethdev_def with a custom command 00:02:33.411 [138/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:33.411 [139/740] Generating lib/rte_ethdev_mingw with a custom command 00:02:33.411 [140/740] Generating lib/rte_pci_def with a custom command 00:02:33.411 [141/740] Generating lib/rte_pci_mingw with a custom command 00:02:33.411 [142/740] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:33.411 [143/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:33.411 [144/740] Linking static target lib/librte_pci.a 00:02:33.411 [145/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:33.411 [146/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:33.411 [147/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:33.411 [148/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:33.690 [149/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:33.690 [150/740] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.690 [151/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:33.690 [152/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:33.690 [153/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:33.690 [154/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:33.690 [155/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:33.690 [156/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:33.690 [157/740] Generating lib/rte_cmdline_def with a custom command 00:02:33.690 [158/740] Generating lib/rte_cmdline_mingw with a custom command 00:02:33.690 [159/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:33.690 [160/740] Generating lib/rte_metrics_def 
with a custom command 00:02:33.690 [161/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:33.690 [162/740] Generating lib/rte_metrics_mingw with a custom command 00:02:33.690 [163/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:33.952 [164/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:33.952 [165/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:33.952 [166/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:33.952 [167/740] Linking static target lib/librte_cmdline.a 00:02:33.952 [168/740] Generating lib/rte_hash_def with a custom command 00:02:33.952 [169/740] Generating lib/rte_hash_mingw with a custom command 00:02:33.952 [170/740] Generating lib/rte_timer_def with a custom command 00:02:33.952 [171/740] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:33.952 [172/740] Generating lib/rte_timer_mingw with a custom command 00:02:34.212 [173/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:34.212 [174/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:34.212 [175/740] Linking static target lib/librte_metrics.a 00:02:34.212 [176/740] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:34.212 [177/740] Linking static target lib/librte_timer.a 00:02:34.473 [178/740] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:34.473 [179/740] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.733 [180/740] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:34.733 [181/740] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.733 [182/740] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.733 [183/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:34.733 
[184/740] Generating lib/rte_acl_def with a custom command 00:02:34.733 [185/740] Generating lib/rte_acl_mingw with a custom command 00:02:34.994 [186/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:34.994 [187/740] Linking static target lib/librte_ethdev.a 00:02:34.994 [188/740] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:34.994 [189/740] Generating lib/rte_bbdev_def with a custom command 00:02:34.994 [190/740] Generating lib/rte_bbdev_mingw with a custom command 00:02:34.994 [191/740] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:34.994 [192/740] Generating lib/rte_bitratestats_def with a custom command 00:02:34.994 [193/740] Generating lib/rte_bitratestats_mingw with a custom command 00:02:35.254 [194/740] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:35.254 [195/740] Linking static target lib/librte_bitratestats.a 00:02:35.513 [196/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:35.513 [197/740] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:35.513 [198/740] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:35.513 [199/740] Linking static target lib/librte_bbdev.a 00:02:35.513 [200/740] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.773 [201/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:36.033 [202/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:36.033 [203/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:36.033 [204/740] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.293 [205/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:36.293 [206/740] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:36.293 [207/740] Linking static target lib/librte_hash.a 00:02:36.293 [208/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 
00:02:36.553 [209/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:36.553 [210/740] Generating lib/rte_bpf_def with a custom command 00:02:36.553 [211/740] Generating lib/rte_bpf_mingw with a custom command 00:02:36.813 [212/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:36.813 [213/740] Generating lib/rte_cfgfile_def with a custom command 00:02:36.813 [214/740] Generating lib/rte_cfgfile_mingw with a custom command 00:02:36.813 [215/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:37.073 [216/740] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:37.073 [217/740] Linking static target lib/librte_cfgfile.a 00:02:37.073 [218/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:37.073 [219/740] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.073 [220/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:02:37.073 [221/740] Generating lib/rte_compressdev_def with a custom command 00:02:37.073 [222/740] Generating lib/rte_compressdev_mingw with a custom command 00:02:37.073 [223/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:37.073 [224/740] Linking static target lib/librte_bpf.a 00:02:37.334 [225/740] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.334 [226/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:37.334 [227/740] Generating lib/rte_cryptodev_def with a custom command 00:02:37.334 [228/740] Generating lib/rte_cryptodev_mingw with a custom command 00:02:37.334 [229/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:37.593 [230/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:02:37.593 [231/740] Linking static target lib/librte_acl.a 00:02:37.593 [232/740] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture 
output) 00:02:37.593 [233/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:37.593 [234/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:37.593 [235/740] Linking static target lib/librte_compressdev.a 00:02:37.593 [236/740] Generating lib/rte_distributor_def with a custom command 00:02:37.593 [237/740] Generating lib/rte_distributor_mingw with a custom command 00:02:37.593 [238/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:37.593 [239/740] Generating lib/rte_efd_def with a custom command 00:02:37.593 [240/740] Generating lib/rte_efd_mingw with a custom command 00:02:37.852 [241/740] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.852 [242/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:37.852 [243/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:38.111 [244/740] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:38.111 [245/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:38.370 [246/740] Linking static target lib/librte_distributor.a 00:02:38.370 [247/740] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:38.370 [248/740] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.370 [249/740] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.633 [250/740] Linking target lib/librte_eal.so.23.0 00:02:38.633 [251/740] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.633 [252/740] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:02:38.633 [253/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:38.633 [254/740] 
Linking target lib/librte_ring.so.23.0 00:02:38.633 [255/740] Linking target lib/librte_meter.so.23.0 00:02:38.891 [256/740] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:02:38.891 [257/740] Linking target lib/librte_rcu.so.23.0 00:02:38.891 [258/740] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:02:38.891 [259/740] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:38.891 [260/740] Linking target lib/librte_mempool.so.23.0 00:02:38.891 [261/740] Linking target lib/librte_pci.so.23.0 00:02:38.891 [262/740] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:02:38.891 [263/740] Linking target lib/librte_timer.so.23.0 00:02:38.891 [264/740] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:02:39.150 [265/740] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:02:39.150 [266/740] Linking target lib/librte_mbuf.so.23.0 00:02:39.150 [267/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:39.150 [268/740] Linking target lib/librte_acl.so.23.0 00:02:39.150 [269/740] Linking target lib/librte_cfgfile.so.23.0 00:02:39.150 [270/740] Linking static target lib/librte_efd.a 00:02:39.150 [271/740] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:02:39.150 [272/740] Generating lib/rte_eventdev_def with a custom command 00:02:39.150 [273/740] Generating lib/rte_eventdev_mingw with a custom command 00:02:39.150 [274/740] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:02:39.150 [275/740] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:02:39.150 [276/740] Linking target lib/librte_net.so.23.0 00:02:39.150 [277/740] Linking target lib/librte_bbdev.so.23.0 00:02:39.414 [278/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:39.414 [279/740] 
Linking target lib/librte_compressdev.so.23.0 00:02:39.414 [280/740] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.414 [281/740] Generating lib/rte_gpudev_def with a custom command 00:02:39.414 [282/740] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.414 [283/740] Linking target lib/librte_distributor.so.23.0 00:02:39.414 [284/740] Generating lib/rte_gpudev_mingw with a custom command 00:02:39.414 [285/740] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:02:39.414 [286/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:39.414 [287/740] Linking target lib/librte_ethdev.so.23.0 00:02:39.414 [288/740] Linking target lib/librte_cmdline.so.23.0 00:02:39.414 [289/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:39.414 [290/740] Linking static target lib/librte_cryptodev.a 00:02:39.414 [291/740] Linking target lib/librte_hash.so.23.0 00:02:39.684 [292/740] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:02:39.684 [293/740] Linking target lib/librte_metrics.so.23.0 00:02:39.684 [294/740] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:02:39.684 [295/740] Linking target lib/librte_bpf.so.23.0 00:02:39.684 [296/740] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:02:39.684 [297/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:39.684 [298/740] Linking target lib/librte_bitratestats.so.23.0 00:02:39.944 [299/740] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:02:39.944 [300/740] Linking target lib/librte_efd.so.23.0 00:02:39.944 [301/740] Generating lib/rte_gro_def with a custom command 00:02:39.944 [302/740] Generating lib/rte_gro_mingw with a custom command 00:02:39.944 [303/740] Compiling C object 
lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:39.944 [304/740] Linking static target lib/librte_gpudev.a 00:02:39.944 [305/740] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:39.944 [306/740] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:39.944 [307/740] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:39.944 [308/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:40.203 [309/740] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:40.462 [310/740] Generating lib/rte_gso_def with a custom command 00:02:40.462 [311/740] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:40.462 [312/740] Generating lib/rte_gso_mingw with a custom command 00:02:40.462 [313/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:40.462 [314/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:40.462 [315/740] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:40.462 [316/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:40.462 [317/740] Linking static target lib/librte_gro.a 00:02:40.462 [318/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:40.462 [319/740] Linking static target lib/librte_eventdev.a 00:02:40.462 [320/740] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:40.462 [321/740] Linking static target lib/librte_gso.a 00:02:40.722 [322/740] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.722 [323/740] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.722 [324/740] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.722 [325/740] Linking target lib/librte_gro.so.23.0 00:02:40.722 [326/740] Linking target lib/librte_gpudev.so.23.0 00:02:40.722 [327/740] Linking target lib/librte_gso.so.23.0 00:02:40.722 [328/740] Generating lib/rte_ip_frag_def 
with a custom command 00:02:40.722 [329/740] Generating lib/rte_ip_frag_mingw with a custom command 00:02:40.722 [330/740] Generating lib/rte_jobstats_def with a custom command 00:02:40.722 [331/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:40.722 [332/740] Generating lib/rte_jobstats_mingw with a custom command 00:02:40.982 [333/740] Generating lib/rte_latencystats_def with a custom command 00:02:40.982 [334/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:40.982 [335/740] Generating lib/rte_latencystats_mingw with a custom command 00:02:40.982 [336/740] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:40.982 [337/740] Linking static target lib/librte_jobstats.a 00:02:40.982 [338/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:40.982 [339/740] Generating lib/rte_lpm_def with a custom command 00:02:40.982 [340/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:40.982 [341/740] Generating lib/rte_lpm_mingw with a custom command 00:02:40.982 [342/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:41.242 [343/740] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.242 [344/740] Linking target lib/librte_jobstats.so.23.0 00:02:41.242 [345/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:41.242 [346/740] Linking static target lib/librte_ip_frag.a 00:02:41.242 [347/740] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:41.502 [348/740] Linking static target lib/librte_latencystats.a 00:02:41.502 [349/740] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:41.502 [350/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:41.502 [351/740] Generating lib/rte_member_def with a custom command 00:02:41.502 [352/740] 
Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.502 [353/740] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:41.502 [354/740] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:41.502 [355/740] Generating lib/rte_member_mingw with a custom command 00:02:41.502 [356/740] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.502 [357/740] Linking target lib/librte_cryptodev.so.23.0 00:02:41.502 [358/740] Generating lib/rte_pcapng_def with a custom command 00:02:41.502 [359/740] Linking target lib/librte_latencystats.so.23.0 00:02:41.502 [360/740] Generating lib/rte_pcapng_mingw with a custom command 00:02:41.502 [361/740] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.761 [362/740] Linking target lib/librte_ip_frag.so.23.0 00:02:41.761 [363/740] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:41.761 [364/740] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:02:41.761 [365/740] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:41.761 [366/740] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:02:41.761 [367/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:41.761 [368/740] Linking static target lib/librte_lpm.a 00:02:41.761 [369/740] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:42.022 [370/740] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:42.022 [371/740] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:42.022 [372/740] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:42.022 [373/740] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:42.282 [374/740] Generating lib/rte_power_def with a custom command 00:02:42.282 
[375/740] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.282 [376/740] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:02:42.282 [377/740] Generating lib/rte_power_mingw with a custom command 00:02:42.282 [378/740] Linking target lib/librte_lpm.so.23.0 00:02:42.282 [379/740] Generating lib/rte_rawdev_def with a custom command 00:02:42.282 [380/740] Generating lib/rte_rawdev_mingw with a custom command 00:02:42.282 [381/740] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:42.282 [382/740] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:42.282 [383/740] Linking static target lib/librte_pcapng.a 00:02:42.282 [384/740] Generating lib/rte_regexdev_def with a custom command 00:02:42.282 [385/740] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.282 [386/740] Generating lib/rte_regexdev_mingw with a custom command 00:02:42.282 [387/740] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:02:42.282 [388/740] Linking target lib/librte_eventdev.so.23.0 00:02:42.282 [389/740] Generating lib/rte_dmadev_def with a custom command 00:02:42.282 [390/740] Generating lib/rte_dmadev_mingw with a custom command 00:02:42.553 [391/740] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:02:42.553 [392/740] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:42.553 [393/740] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:02:42.553 [394/740] Generating lib/rte_rib_def with a custom command 00:02:42.553 [395/740] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:42.553 [396/740] Generating lib/rte_rib_mingw with a custom command 00:02:42.553 [397/740] Linking static target lib/librte_rawdev.a 00:02:42.553 [398/740] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to 
capture output) 00:02:42.553 [399/740] Generating lib/rte_reorder_def with a custom command 00:02:42.553 [400/740] Linking target lib/librte_pcapng.so.23.0 00:02:42.553 [401/740] Generating lib/rte_reorder_mingw with a custom command 00:02:42.553 [402/740] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:42.553 [403/740] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:02:42.553 [404/740] Linking static target lib/librte_power.a 00:02:42.553 [405/740] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:42.553 [406/740] Linking static target lib/librte_dmadev.a 00:02:42.830 [407/740] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:42.830 [408/740] Linking static target lib/librte_regexdev.a 00:02:42.830 [409/740] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:42.830 [410/740] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:42.830 [411/740] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.830 [412/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:42.830 [413/740] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:42.830 [414/740] Linking target lib/librte_rawdev.so.23.0 00:02:42.830 [415/740] Generating lib/rte_sched_def with a custom command 00:02:42.830 [416/740] Generating lib/rte_sched_mingw with a custom command 00:02:43.090 [417/740] Generating lib/rte_security_def with a custom command 00:02:43.090 [418/740] Generating lib/rte_security_mingw with a custom command 00:02:43.090 [419/740] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:43.090 [420/740] Linking static target lib/librte_reorder.a 00:02:43.090 [421/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:43.090 [422/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:43.090 [423/740] Compiling C object 
lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:43.090 [424/740] Generating lib/rte_stack_def with a custom command 00:02:43.090 [425/740] Linking static target lib/librte_member.a 00:02:43.090 [426/740] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.090 [427/740] Generating lib/rte_stack_mingw with a custom command 00:02:43.090 [428/740] Linking target lib/librte_dmadev.so.23.0 00:02:43.090 [429/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:43.090 [430/740] Linking static target lib/librte_stack.a 00:02:43.090 [431/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:43.090 [432/740] Linking static target lib/librte_rib.a 00:02:43.090 [433/740] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.349 [434/740] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:02:43.349 [435/740] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:43.349 [436/740] Linking target lib/librte_reorder.so.23.0 00:02:43.349 [437/740] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.350 [438/740] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.350 [439/740] Linking target lib/librte_regexdev.so.23.0 00:02:43.350 [440/740] Linking target lib/librte_stack.so.23.0 00:02:43.350 [441/740] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.350 [442/740] Linking target lib/librte_member.so.23.0 00:02:43.350 [443/740] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.350 [444/740] Linking target lib/librte_power.so.23.0 00:02:43.609 [445/740] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:43.609 [446/740] Linking static target lib/librte_security.a 00:02:43.609 [447/740] Generating 
lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.609 [448/740] Linking target lib/librte_rib.so.23.0 00:02:43.609 [449/740] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:43.609 [450/740] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:02:43.609 [451/740] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:43.609 [452/740] Generating lib/rte_vhost_def with a custom command 00:02:43.609 [453/740] Generating lib/rte_vhost_mingw with a custom command 00:02:43.869 [454/740] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.869 [455/740] Linking target lib/librte_security.so.23.0 00:02:43.869 [456/740] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:43.869 [457/740] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:02:43.869 [458/740] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:43.869 [459/740] Linking static target lib/librte_sched.a 00:02:44.438 [460/740] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:44.438 [461/740] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.438 [462/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:44.438 [463/740] Linking target lib/librte_sched.so.23.0 00:02:44.438 [464/740] Generating lib/rte_ipsec_def with a custom command 00:02:44.438 [465/740] Generating lib/rte_ipsec_mingw with a custom command 00:02:44.438 [466/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:44.438 [467/740] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:02:44.438 [468/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:44.438 [469/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:44.698 [470/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:44.698 [471/740] Compiling C object 
lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:44.698 [472/740] Generating lib/rte_fib_def with a custom command 00:02:44.698 [473/740] Generating lib/rte_fib_mingw with a custom command 00:02:44.698 [474/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:44.958 [475/740] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:02:44.958 [476/740] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:02:45.217 [477/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:45.217 [478/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:45.217 [479/740] Linking static target lib/librte_ipsec.a 00:02:45.217 [480/740] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:45.217 [481/740] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:45.217 [482/740] Linking static target lib/librte_fib.a 00:02:45.476 [483/740] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:45.476 [484/740] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:45.476 [485/740] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.476 [486/740] Linking target lib/librte_ipsec.so.23.0 00:02:45.476 [487/740] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:45.476 [488/740] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.736 [489/740] Linking target lib/librte_fib.so.23.0 00:02:45.736 [490/740] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:45.736 [491/740] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:46.306 [492/740] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:46.306 [493/740] Generating lib/rte_port_def with a custom command 00:02:46.306 [494/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:46.306 [495/740] Generating lib/rte_port_mingw with a custom command 00:02:46.306 [496/740] 
Generating lib/rte_pdump_def with a custom command 00:02:46.306 [497/740] Generating lib/rte_pdump_mingw with a custom command 00:02:46.306 [498/740] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:46.306 [499/740] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:46.306 [500/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:46.566 [501/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:46.566 [502/740] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:46.566 [503/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:46.566 [504/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:46.566 [505/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:46.826 [506/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:46.826 [507/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:47.085 [508/740] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:47.085 [509/740] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:47.085 [510/740] Linking static target lib/librte_port.a 00:02:47.085 [511/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:47.085 [512/740] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:47.085 [513/740] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:47.085 [514/740] Linking static target lib/librte_pdump.a 00:02:47.345 [515/740] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.345 [516/740] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.345 [517/740] Linking target lib/librte_pdump.so.23.0 00:02:47.345 [518/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:47.604 [519/740] 
Linking target lib/librte_port.so.23.0 00:02:47.604 [520/740] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:02:47.604 [521/740] Generating lib/rte_table_def with a custom command 00:02:47.604 [522/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:47.604 [523/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:47.604 [524/740] Generating lib/rte_table_mingw with a custom command 00:02:47.863 [525/740] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:47.863 [526/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:47.863 [527/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:47.863 [528/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:47.863 [529/740] Generating lib/rte_pipeline_def with a custom command 00:02:47.863 [530/740] Generating lib/rte_pipeline_mingw with a custom command 00:02:47.863 [531/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:47.863 [532/740] Linking static target lib/librte_table.a 00:02:48.122 [533/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:48.384 [534/740] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:48.384 [535/740] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.384 [536/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:48.384 [537/740] Linking target lib/librte_table.so.23.0 00:02:48.384 [538/740] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:48.645 [539/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:48.645 [540/740] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:02:48.645 [541/740] Generating lib/rte_graph_def with a custom command 00:02:48.645 [542/740] Generating lib/rte_graph_mingw with a 
custom command 00:02:48.645 [543/740] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:48.905 [544/740] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:48.905 [545/740] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:48.905 [546/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:48.905 [547/740] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:48.905 [548/740] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:48.905 [549/740] Linking static target lib/librte_graph.a 00:02:49.165 [550/740] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:49.165 [551/740] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:49.165 [552/740] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:49.433 [553/740] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:49.433 [554/740] Generating lib/rte_node_def with a custom command 00:02:49.433 [555/740] Generating lib/rte_node_mingw with a custom command 00:02:49.701 [556/740] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:49.701 [557/740] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.701 [558/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:49.701 [559/740] Linking target lib/librte_graph.so.23.0 00:02:49.701 [560/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:49.701 [561/740] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:49.701 [562/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:49.701 [563/740] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:02:49.701 [564/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:49.701 [565/740] Generating drivers/rte_bus_pci_def with a custom command 00:02:49.701 [566/740] Compiling 
C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:49.961 [567/740] Generating drivers/rte_bus_pci_mingw with a custom command 00:02:49.961 [568/740] Generating drivers/rte_bus_vdev_def with a custom command 00:02:49.961 [569/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:49.961 [570/740] Generating drivers/rte_bus_vdev_mingw with a custom command 00:02:49.961 [571/740] Generating drivers/rte_mempool_ring_def with a custom command 00:02:49.961 [572/740] Generating drivers/rte_mempool_ring_mingw with a custom command 00:02:49.961 [573/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:49.961 [574/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:49.961 [575/740] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:49.961 [576/740] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:49.961 [577/740] Linking static target lib/librte_node.a 00:02:49.961 [578/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:50.221 [579/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:50.221 [580/740] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:50.221 [581/740] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:50.221 [582/740] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:50.221 [583/740] Linking static target drivers/librte_bus_vdev.a 00:02:50.221 [584/740] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.221 [585/740] Linking target lib/librte_node.so.23.0 00:02:50.221 [586/740] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:50.481 [587/740] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:50.481 [588/740] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 
00:02:50.481 [589/740] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:50.481 [590/740] Linking static target drivers/librte_bus_pci.a 00:02:50.481 [591/740] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:50.481 [592/740] Linking target drivers/librte_bus_vdev.so.23.0 00:02:50.481 [593/740] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:02:50.741 [594/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:50.741 [595/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:50.741 [596/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:50.741 [597/740] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:50.741 [598/740] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:50.741 [599/740] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.741 [600/740] Linking target drivers/librte_bus_pci.so.23.0 00:02:51.000 [601/740] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:02:51.000 [602/740] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:51.000 [603/740] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:51.000 [604/740] Linking static target drivers/librte_mempool_ring.a 00:02:51.000 [605/740] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:51.000 [606/740] Linking target drivers/librte_mempool_ring.so.23.0 00:02:51.000 [607/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:51.260 [608/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:51.520 [609/740] Compiling C object 
drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:51.520 [610/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:51.780 [611/740] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:52.039 [612/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:52.299 [613/740] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:52.299 [614/740] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:52.299 [615/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:52.299 [616/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:52.597 [617/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:52.857 [618/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:52.857 [619/740] Generating drivers/rte_net_i40e_def with a custom command 00:02:52.857 [620/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:52.857 [621/740] Generating drivers/rte_net_i40e_mingw with a custom command 00:02:53.117 [622/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:53.377 [623/740] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:53.637 [624/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:53.637 [625/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:53.897 [626/740] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:53.897 [627/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:53.897 [628/740] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:53.897 [629/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:53.897 [630/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:54.157 
[631/740] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:54.157 [632/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:54.157 [633/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:02:54.417 [634/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:54.676 [635/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:54.676 [636/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:54.676 [637/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:54.676 [638/740] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:54.937 [639/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:54.937 [640/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:54.937 [641/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:54.937 [642/740] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:54.937 [643/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:54.937 [644/740] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:54.937 [645/740] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:54.937 [646/740] Linking static target drivers/librte_net_i40e.a 00:02:55.197 [647/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:55.197 [648/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:55.458 [649/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:55.458 [650/740] Generating drivers/rte_net_i40e.sym_chk with a custom 
command (wrapped by meson to capture output) 00:02:55.458 [651/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:55.458 [652/740] Linking target drivers/librte_net_i40e.so.23.0 00:02:55.718 [653/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:55.718 [654/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:55.718 [655/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:55.718 [656/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:55.718 [657/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:55.718 [658/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:55.978 [659/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:55.978 [660/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:55.978 [661/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:56.238 [662/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:56.238 [663/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:56.238 [664/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:56.238 [665/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:56.497 [666/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:56.757 [667/740] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:57.017 [668/740] Linking static target lib/librte_vhost.a 00:02:57.017 [669/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:57.017 [670/740] Compiling C object 
app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:57.017 [671/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:57.017 [672/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:57.278 [673/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:57.278 [674/740] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:57.278 [675/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:57.538 [676/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:57.538 [677/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:57.538 [678/740] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:57.538 [679/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:57.798 [680/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:57.798 [681/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:57.798 [682/740] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.798 [683/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:57.798 [684/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:57.798 [685/740] Linking target lib/librte_vhost.so.23.0 00:02:58.059 [686/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:58.059 [687/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:58.059 [688/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:58.059 [689/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:58.059 [690/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:58.320 [691/740] 
Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:58.580 [692/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:58.580 [693/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:58.580 [694/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:58.840 [695/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:58.840 [696/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:59.100 [697/740] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:59.100 [698/740] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:59.100 [699/740] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:59.358 [700/740] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:59.358 [701/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:59.617 [702/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:59.617 [703/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:59.617 [704/740] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:59.877 [705/740] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:59.877 [706/740] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:59.877 [707/740] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:00.145 [708/740] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:00.418 [709/740] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:00.418 [710/740] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:00.678 [711/740] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:00.678 [712/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:00.678 [713/740] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:00.678 [714/740] Compiling C object 
app/dpdk-test-sad.p/test-sad_main.c.o 00:03:00.939 [715/740] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:00.939 [716/740] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:00.939 [717/740] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:01.200 [718/740] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:01.460 [719/740] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:03.370 [720/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:03.370 [721/740] Linking static target lib/librte_pipeline.a 00:03:03.630 [722/740] Linking target app/dpdk-test-acl 00:03:03.630 [723/740] Linking target app/dpdk-test-compress-perf 00:03:03.630 [724/740] Linking target app/dpdk-test-eventdev 00:03:03.630 [725/740] Linking target app/dpdk-test-bbdev 00:03:03.630 [726/740] Linking target app/dpdk-test-cmdline 00:03:03.630 [727/740] Linking target app/dpdk-test-crypto-perf 00:03:03.630 [728/740] Linking target app/dpdk-proc-info 00:03:03.630 [729/740] Linking target app/dpdk-pdump 00:03:03.894 [730/740] Linking target app/dpdk-dumpcap 00:03:04.155 [731/740] Linking target app/dpdk-test-gpudev 00:03:04.155 [732/740] Linking target app/dpdk-test-fib 00:03:04.155 [733/740] Linking target app/dpdk-test-pipeline 00:03:04.155 [734/740] Linking target app/dpdk-test-flow-perf 00:03:04.155 [735/740] Linking target app/dpdk-test-sad 00:03:04.155 [736/740] Linking target app/dpdk-test-regex 00:03:04.155 [737/740] Linking target app/dpdk-test-security-perf 00:03:04.155 [738/740] Linking target app/dpdk-testpmd 00:03:08.357 [739/740] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.617 [740/740] Linking target lib/librte_pipeline.so.23.0 00:03:08.617 04:01:08 build_native_dpdk -- common/autobuild_common.sh@201 -- $ uname -s 00:03:08.617 04:01:08 build_native_dpdk -- 
common/autobuild_common.sh@201 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:08.617 04:01:08 build_native_dpdk -- common/autobuild_common.sh@214 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:08.617 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:08.617 [0/1] Installing files. 00:03:08.880 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:08.880 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:08.880 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:08.880 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:08.880 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:08.880 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:08.880 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:08.880 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:08.880 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:08.880 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:08.880 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:08.880 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:08.880 Installing 
/home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:08.880 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:08.880 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:08.880 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:08.880 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:08.880 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:08.880 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:08.880 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:08.880 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:08.880 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:08.880 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:08.880 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:08.880 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:08.880 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:08.880 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:08.880 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:08.880 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:08.880 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:08.880 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:08.880 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:08.880 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:08.880 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:08.880 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:08.880 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:08.880 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:08.880 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:08.880 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.880 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.880 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.880 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.880 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.880 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.880 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.880 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.880 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.880 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.880 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.880 Installing 
/home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.880 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.880 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.880 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.880 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.880 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:08.880 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/flow_classify.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:08.880 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/ipv4_rules_file.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:08.880 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:08.880 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:08.880 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:08.881 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:08.881 Installing 
/home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:08.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:08.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:08.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.881 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.881 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:08.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:08.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:08.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/kni.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:08.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:08.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:08.881 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:08.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:08.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:08.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:08.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:08.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.881 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.882 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.882 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.882 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.882 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.882 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.882 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.882 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.882 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.882 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.882 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.882 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:08.882 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:08.882 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:08.882 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:08.882 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:08.882 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:08.882 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:08.882 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:08.882 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:08.882 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:08.882 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:08.882 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:08.882 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:08.882 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:08.882 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:08.882 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:08.882 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:08.882 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:08.882 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:08.882 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:08.882 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:08.882 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:08.882 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:08.882 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:08.882 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:08.882 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:08.882 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:08.882 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:08.882 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:08.882 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:08.882 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:08.882 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:08.882 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:08.882 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:08.882 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:08.882 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.882 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.882 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.882 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.882 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.882 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.882 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.882 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.882 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.882 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.882 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.882 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.882 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.882 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.882 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.882 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.882 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.882 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.882 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.882 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.882 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.882 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.882 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.882 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.882 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.882 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.882 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.882 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.882 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.882 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 
00:03:08.882 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.882 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.882 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.882 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:08.882 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:08.882 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:08.882 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:08.882 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:08.882 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:08.882 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:08.882 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:08.883 Installing 
/home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:08.883 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:08.883 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:08.883 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:08.883 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:08.883 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:08.883 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:08.883 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:08.883 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:08.883 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:08.883 Installing 
/home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:08.883 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:08.883 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:08.883 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:08.883 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:08.883 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:08.883 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:08.883 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:08.883 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:08.883 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:08.883 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:08.883 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:08.883 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:08.883 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:08.883 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:08.883 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:08.883 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:08.883 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:08.883 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:08.883 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.883 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.883 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.883 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.883 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.883 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.883 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.883 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.883 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.883 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.883 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.883 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.883 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.883 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.883 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.883 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.883 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.883 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.883 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.883 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.883 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.883 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.883 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.883 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.883 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.883 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.883 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.883 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.883 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.883 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.883 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.883 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.883 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.883 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.883 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.883 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.883 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:08.883 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:08.883 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:08.883 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:08.883 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:08.883 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:08.883 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:08.883 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:08.883 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:08.883 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:08.883 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:08.883 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:08.883 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:08.883 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:08.883 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:08.883 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:08.883 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:08.883 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:08.883 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:08.884 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:08.884 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:08.884 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:08.884 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:08.884 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:08.884 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:08.884 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:08.884 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:08.884 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:08.884 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:08.884 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:08.884 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:08.884 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:08.884 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:08.884 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:08.884 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:08.884 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:08.884 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:08.884 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:08.884 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:08.884 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:08.884 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:08.884 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:08.884 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:08.884 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:08.884 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:08.884 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:08.884 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:08.884 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:08.884 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:08.884 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:08.884 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:08.884 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:08.884 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:08.884 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:08.884 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:08.884 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:08.884 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:08.884 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:08.884 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:08.884 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:08.884 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:08.884 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:08.884 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:08.884 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:08.884 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:08.884 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:08.884 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:08.884 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:08.884 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:08.884 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:08.884 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:08.884 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:08.884 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:08.884 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:08.884 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:08.884 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:08.884 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:08.884 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:08.884 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:08.884 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.884 Installing lib/librte_kvargs.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.884 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.884 Installing lib/librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.884 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.884 Installing lib/librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.884 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.884 Installing lib/librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.884 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.884 Installing lib/librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.884 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.884 Installing lib/librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.884 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.884 Installing lib/librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.884 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.144 Installing lib/librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.144 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.144 Installing lib/librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.144 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing lib/librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing lib/librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing lib/librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing 
lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing lib/librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing lib/librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing lib/librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing lib/librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing lib/librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing lib/librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing lib/librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing lib/librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing lib/librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing lib/librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing lib/librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing lib/librte_efd.a to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing lib/librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing lib/librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing lib/librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing lib/librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing lib/librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing lib/librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing lib/librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing lib/librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing lib/librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing lib/librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing lib/librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 
00:03:09.145 Installing lib/librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing lib/librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing lib/librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing lib/librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing lib/librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing lib/librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing lib/librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing lib/librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing lib/librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing lib/librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing lib/librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing lib/librte_fib.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing lib/librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing lib/librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing lib/librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing lib/librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing lib/librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing lib/librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing drivers/librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:09.145 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing drivers/librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:09.145 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing drivers/librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:09.145 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.145 Installing drivers/librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:09.145 Installing app/dpdk-dumpcap to 
/home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.145 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.145 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.145 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.145 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.145 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.145 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.145 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.145 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.415 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.415 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.415 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.415 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.415 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.415 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.415 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.415 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.415 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.415 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.415 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.415 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:09.415 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:09.415 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:09.415 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:09.415 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:09.415 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:09.415 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:09.415 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:09.415 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:09.415 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:09.415 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:09.415 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:09.415 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.415 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.415 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.415 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.415 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.415 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.415 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.415 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.415 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.415 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.415 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.415 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.415 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.415 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.415 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.415 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.415 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.415 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.415 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.415 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.415 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.415 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.415 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.415 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.415 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.415 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.415 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.415 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.415 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.415 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.415 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.415 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.415 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.415 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:09.415 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.415 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.415 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.415 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.415 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.415 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.415 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.415 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.415 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.415 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.415 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.415 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.415 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.415 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.415 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:03:09.415 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.415 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.415 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.415 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.415 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.415 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.415 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.415 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.415 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.415 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.415 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.415 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.415 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.415 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.415 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.415 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.415 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.415 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.415 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.415 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.415 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.415 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.415 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.415 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.415 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing 
/home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.416 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.417 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.417 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.417 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.417 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.417 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.417 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.417 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:09.417 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.417 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.417 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.417 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_empty_poll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.417 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_intel_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.417 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.417 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.417 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.417 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.417 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.418 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.418 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.418 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.418 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.418 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.418 Installing 
/home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.418 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.418 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.418 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.418 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.418 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.418 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.418 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.418 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.418 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.418 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.418 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.423 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.423 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.423 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.423 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.423 Installing 
/home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.423 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.423 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.423 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.423 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.423 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.423 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.423 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.423 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.423 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.423 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.423 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.424 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.424 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.424 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.424 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.424 Installing 
/home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.424 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.424 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.424 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.424 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.425 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.425 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.425 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.425 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.425 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.425 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.425 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.425 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.425 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.425 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.425 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:09.425 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.425 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.425 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.425 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.425 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.425 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.425 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.425 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.425 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.425 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.425 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.425 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.425 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.425 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.425 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.425 Installing 
/home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.426 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.426 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.426 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.426 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.426 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.426 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.426 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.426 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.426 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.426 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.426 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.426 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.426 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.426 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.426 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to 
/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:09.426 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:09.426 Installing symlink pointing to librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.23 00:03:09.426 Installing symlink pointing to librte_kvargs.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:09.426 Installing symlink pointing to librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.23 00:03:09.427 Installing symlink pointing to librte_telemetry.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:09.427 Installing symlink pointing to librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23 00:03:09.427 Installing symlink pointing to librte_eal.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:09.427 Installing symlink pointing to librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.23 00:03:09.427 Installing symlink pointing to librte_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:09.427 Installing symlink pointing to librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.23 00:03:09.427 Installing symlink pointing to librte_rcu.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:09.427 Installing symlink pointing to librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.23 00:03:09.427 Installing symlink pointing to librte_mempool.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:09.427 Installing symlink pointing to librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.23 00:03:09.427 Installing symlink pointing to librte_mbuf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:09.427 Installing symlink pointing to librte_net.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.23 00:03:09.427 Installing symlink pointing to librte_net.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:09.427 Installing symlink pointing to librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.23 00:03:09.427 Installing symlink pointing to librte_meter.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:09.427 Installing symlink pointing to librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.23 00:03:09.427 Installing symlink pointing to librte_ethdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:09.427 Installing symlink pointing to librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.23 00:03:09.427 Installing symlink pointing to librte_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:09.427 Installing symlink pointing to librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.23 00:03:09.427 Installing symlink pointing to librte_cmdline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:09.427 Installing symlink pointing to librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.23 00:03:09.427 Installing symlink pointing to librte_metrics.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:09.427 Installing symlink pointing to librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.23 00:03:09.427 Installing symlink pointing to librte_hash.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:09.427 Installing symlink pointing to librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.23 00:03:09.427 Installing symlink pointing to librte_timer.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:09.427 Installing symlink pointing to librte_acl.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.23 00:03:09.427 Installing symlink pointing to librte_acl.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:09.428 Installing symlink pointing to librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.23 00:03:09.428 Installing symlink pointing to librte_bbdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:09.428 Installing symlink pointing to librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.23 00:03:09.428 Installing symlink pointing to librte_bitratestats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:09.428 Installing symlink pointing to librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.23 00:03:09.428 Installing symlink pointing to librte_bpf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:09.428 Installing symlink pointing to librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.23 00:03:09.428 Installing symlink pointing to librte_cfgfile.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:09.428 Installing symlink pointing to librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.23 00:03:09.428 Installing symlink pointing to librte_compressdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:09.428 Installing symlink pointing to librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.23 00:03:09.428 Installing symlink pointing to librte_cryptodev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:09.428 Installing symlink pointing to librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.23 00:03:09.428 Installing symlink pointing to librte_distributor.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 
00:03:09.428 Installing symlink pointing to librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.23 00:03:09.428 Installing symlink pointing to librte_efd.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:09.428 Installing symlink pointing to librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.23 00:03:09.428 Installing symlink pointing to librte_eventdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:09.428 Installing symlink pointing to librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.23 00:03:09.429 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:03:09.429 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:03:09.429 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:03:09.432 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:03:09.432 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:03:09.432 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:03:09.432 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:03:09.432 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:03:09.432 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:03:09.432 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:03:09.432 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:03:09.432 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:03:09.432 Installing symlink pointing to librte_gpudev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:09.432 Installing symlink pointing to librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.23 00:03:09.432 Installing symlink pointing to librte_gro.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:09.432 Installing 
symlink pointing to librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.23 00:03:09.432 Installing symlink pointing to librte_gso.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:09.432 Installing symlink pointing to librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.23 00:03:09.436 Installing symlink pointing to librte_ip_frag.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:09.436 Installing symlink pointing to librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.23 00:03:09.436 Installing symlink pointing to librte_jobstats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:09.436 Installing symlink pointing to librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.23 00:03:09.436 Installing symlink pointing to librte_latencystats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:09.436 Installing symlink pointing to librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.23 00:03:09.436 Installing symlink pointing to librte_lpm.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:09.436 Installing symlink pointing to librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.23 00:03:09.436 Installing symlink pointing to librte_member.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:09.436 Installing symlink pointing to librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.23 00:03:09.436 Installing symlink pointing to librte_pcapng.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:09.436 Installing symlink pointing to librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.23 00:03:09.436 Installing symlink pointing to librte_power.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 
00:03:09.436 Installing symlink pointing to librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.23 00:03:09.436 Installing symlink pointing to librte_rawdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:09.436 Installing symlink pointing to librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.23 00:03:09.436 Installing symlink pointing to librte_regexdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:09.436 Installing symlink pointing to librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.23 00:03:09.436 Installing symlink pointing to librte_dmadev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:09.436 Installing symlink pointing to librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.23 00:03:09.436 Installing symlink pointing to librte_rib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:09.436 Installing symlink pointing to librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.23 00:03:09.436 Installing symlink pointing to librte_reorder.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:09.436 Installing symlink pointing to librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.23 00:03:09.436 Installing symlink pointing to librte_sched.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:09.437 Installing symlink pointing to librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.23 00:03:09.437 Installing symlink pointing to librte_security.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:09.437 Installing symlink pointing to librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.23 00:03:09.437 Installing symlink pointing to librte_stack.so.23 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:09.437 Installing symlink pointing to librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.23 00:03:09.437 Installing symlink pointing to librte_vhost.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:09.437 Installing symlink pointing to librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.23 00:03:09.437 Installing symlink pointing to librte_ipsec.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:09.437 Installing symlink pointing to librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.23 00:03:09.437 Installing symlink pointing to librte_fib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:09.437 Installing symlink pointing to librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.23 00:03:09.437 Installing symlink pointing to librte_port.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:09.437 Installing symlink pointing to librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.23 00:03:09.437 Installing symlink pointing to librte_pdump.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:09.437 Installing symlink pointing to librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.23 00:03:09.437 Installing symlink pointing to librte_table.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:09.437 Installing symlink pointing to librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.23 00:03:09.437 Installing symlink pointing to librte_pipeline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:09.437 Installing symlink pointing to librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.23 00:03:09.437 Installing symlink pointing to librte_graph.so.23 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:09.437 Installing symlink pointing to librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.23 00:03:09.437 Installing symlink pointing to librte_node.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:09.437 Installing symlink pointing to librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:03:09.437 Installing symlink pointing to librte_bus_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:03:09.437 Installing symlink pointing to librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:03:09.437 Installing symlink pointing to librte_bus_vdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:03:09.437 Installing symlink pointing to librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:03:09.437 Installing symlink pointing to librte_mempool_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:03:09.437 Installing symlink pointing to librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:03:09.437 Installing symlink pointing to librte_net_i40e.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:03:09.437 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:03:09.437 04:01:09 build_native_dpdk -- common/autobuild_common.sh@220 -- $ cat 00:03:09.437 ************************************ 00:03:09.437 END TEST build_native_dpdk 00:03:09.437 ************************************ 00:03:09.437 04:01:09 build_native_dpdk -- common/autobuild_common.sh@225 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:09.437 00:03:09.437 real 0m47.259s 
00:03:09.437 user 4m29.091s 00:03:09.437 sys 0m55.636s 00:03:09.437 04:01:09 build_native_dpdk -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:09.437 04:01:09 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:09.437 04:01:09 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:09.437 04:01:09 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:09.437 04:01:09 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:09.437 04:01:09 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:09.437 04:01:09 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:09.437 04:01:09 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:09.437 04:01:09 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:09.437 04:01:09 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared 00:03:09.699 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:09.699 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.699 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:09.699 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:09.958 Using 'verbs' RDMA provider 00:03:26.276 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:41.164 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:41.164 Creating mk/config.mk...done. 00:03:41.164 Creating mk/cc.flags.mk...done. 00:03:41.164 Type 'make' to build. 
00:03:41.164 04:01:41 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:03:41.164 04:01:41 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:41.164 04:01:41 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:41.164 04:01:41 -- common/autotest_common.sh@10 -- $ set +x 00:03:41.164 ************************************ 00:03:41.164 START TEST make 00:03:41.164 ************************************ 00:03:41.164 04:01:41 make -- common/autotest_common.sh@1129 -- $ make -j10 00:03:41.733 make[1]: Nothing to be done for 'all'. 00:04:28.437 CC lib/ut_mock/mock.o 00:04:28.437 CC lib/log/log.o 00:04:28.437 CC lib/log/log_flags.o 00:04:28.437 CC lib/log/log_deprecated.o 00:04:28.437 CC lib/ut/ut.o 00:04:28.437 LIB libspdk_ut_mock.a 00:04:28.437 SO libspdk_ut_mock.so.6.0 00:04:28.437 LIB libspdk_ut.a 00:04:28.437 LIB libspdk_log.a 00:04:28.437 SYMLINK libspdk_ut_mock.so 00:04:28.437 SO libspdk_ut.so.2.0 00:04:28.437 SO libspdk_log.so.7.1 00:04:28.437 SYMLINK libspdk_ut.so 00:04:28.437 SYMLINK libspdk_log.so 00:04:28.437 CC lib/dma/dma.o 00:04:28.437 CC lib/util/base64.o 00:04:28.437 CC lib/ioat/ioat.o 00:04:28.437 CC lib/util/crc32.o 00:04:28.437 CC lib/util/bit_array.o 00:04:28.437 CC lib/util/cpuset.o 00:04:28.437 CC lib/util/crc32c.o 00:04:28.437 CC lib/util/crc16.o 00:04:28.437 CXX lib/trace_parser/trace.o 00:04:28.437 CC lib/vfio_user/host/vfio_user_pci.o 00:04:28.698 CC lib/util/crc32_ieee.o 00:04:28.698 CC lib/vfio_user/host/vfio_user.o 00:04:28.698 CC lib/util/crc64.o 00:04:28.698 CC lib/util/dif.o 00:04:28.698 LIB libspdk_dma.a 00:04:28.698 CC lib/util/fd.o 00:04:28.698 SO libspdk_dma.so.5.0 00:04:28.698 CC lib/util/fd_group.o 00:04:28.698 CC lib/util/file.o 00:04:28.698 LIB libspdk_ioat.a 00:04:28.698 CC lib/util/hexlify.o 00:04:28.698 SO libspdk_ioat.so.7.0 00:04:28.698 SYMLINK libspdk_dma.so 00:04:28.698 CC lib/util/iov.o 00:04:28.958 CC lib/util/math.o 00:04:28.958 SYMLINK libspdk_ioat.so 00:04:28.958 CC lib/util/net.o 00:04:28.958 CC 
lib/util/pipe.o 00:04:28.958 LIB libspdk_vfio_user.a 00:04:28.958 SO libspdk_vfio_user.so.5.0 00:04:28.958 CC lib/util/strerror_tls.o 00:04:28.958 SYMLINK libspdk_vfio_user.so 00:04:28.958 CC lib/util/string.o 00:04:28.958 CC lib/util/uuid.o 00:04:28.958 CC lib/util/xor.o 00:04:28.958 CC lib/util/zipf.o 00:04:28.958 CC lib/util/md5.o 00:04:29.526 LIB libspdk_util.a 00:04:29.526 SO libspdk_util.so.10.1 00:04:29.526 LIB libspdk_trace_parser.a 00:04:29.526 SO libspdk_trace_parser.so.6.0 00:04:29.785 SYMLINK libspdk_util.so 00:04:29.785 SYMLINK libspdk_trace_parser.so 00:04:30.044 CC lib/conf/conf.o 00:04:30.044 CC lib/env_dpdk/env.o 00:04:30.044 CC lib/env_dpdk/memory.o 00:04:30.044 CC lib/idxd/idxd.o 00:04:30.044 CC lib/idxd/idxd_user.o 00:04:30.044 CC lib/idxd/idxd_kernel.o 00:04:30.044 CC lib/env_dpdk/pci.o 00:04:30.044 CC lib/rdma_utils/rdma_utils.o 00:04:30.044 CC lib/json/json_parse.o 00:04:30.044 CC lib/vmd/vmd.o 00:04:30.044 CC lib/vmd/led.o 00:04:30.302 LIB libspdk_conf.a 00:04:30.302 CC lib/env_dpdk/init.o 00:04:30.302 CC lib/json/json_util.o 00:04:30.302 SO libspdk_conf.so.6.0 00:04:30.302 LIB libspdk_rdma_utils.a 00:04:30.302 SO libspdk_rdma_utils.so.1.0 00:04:30.302 SYMLINK libspdk_conf.so 00:04:30.302 CC lib/env_dpdk/threads.o 00:04:30.302 CC lib/env_dpdk/pci_ioat.o 00:04:30.302 SYMLINK libspdk_rdma_utils.so 00:04:30.302 CC lib/env_dpdk/pci_virtio.o 00:04:30.302 CC lib/env_dpdk/pci_vmd.o 00:04:30.302 CC lib/env_dpdk/pci_idxd.o 00:04:30.561 CC lib/env_dpdk/pci_event.o 00:04:30.561 CC lib/env_dpdk/sigbus_handler.o 00:04:30.561 CC lib/json/json_write.o 00:04:30.561 CC lib/env_dpdk/pci_dpdk.o 00:04:30.561 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:30.561 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:30.561 CC lib/rdma_provider/common.o 00:04:30.561 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:30.820 LIB libspdk_idxd.a 00:04:30.820 SO libspdk_idxd.so.12.1 00:04:30.820 LIB libspdk_vmd.a 00:04:30.820 SO libspdk_vmd.so.6.0 00:04:30.820 LIB libspdk_json.a 00:04:30.820 
SYMLINK libspdk_idxd.so 00:04:30.820 SO libspdk_json.so.6.0 00:04:30.820 LIB libspdk_rdma_provider.a 00:04:30.820 SYMLINK libspdk_vmd.so 00:04:30.820 SYMLINK libspdk_json.so 00:04:30.820 SO libspdk_rdma_provider.so.7.0 00:04:31.079 SYMLINK libspdk_rdma_provider.so 00:04:31.338 CC lib/jsonrpc/jsonrpc_server.o 00:04:31.338 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:31.338 CC lib/jsonrpc/jsonrpc_client.o 00:04:31.338 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:31.598 LIB libspdk_jsonrpc.a 00:04:31.857 SO libspdk_jsonrpc.so.6.0 00:04:31.857 SYMLINK libspdk_jsonrpc.so 00:04:31.857 LIB libspdk_env_dpdk.a 00:04:32.117 SO libspdk_env_dpdk.so.15.1 00:04:32.117 SYMLINK libspdk_env_dpdk.so 00:04:32.117 CC lib/rpc/rpc.o 00:04:32.377 LIB libspdk_rpc.a 00:04:32.637 SO libspdk_rpc.so.6.0 00:04:32.637 SYMLINK libspdk_rpc.so 00:04:32.897 CC lib/keyring/keyring.o 00:04:32.897 CC lib/keyring/keyring_rpc.o 00:04:32.897 CC lib/trace/trace.o 00:04:32.897 CC lib/trace/trace_flags.o 00:04:32.897 CC lib/trace/trace_rpc.o 00:04:32.897 CC lib/notify/notify.o 00:04:32.897 CC lib/notify/notify_rpc.o 00:04:33.158 LIB libspdk_notify.a 00:04:33.158 SO libspdk_notify.so.6.0 00:04:33.158 LIB libspdk_keyring.a 00:04:33.158 SYMLINK libspdk_notify.so 00:04:33.158 SO libspdk_keyring.so.2.0 00:04:33.158 LIB libspdk_trace.a 00:04:33.418 SO libspdk_trace.so.11.0 00:04:33.418 SYMLINK libspdk_keyring.so 00:04:33.418 SYMLINK libspdk_trace.so 00:04:33.988 CC lib/thread/thread.o 00:04:33.988 CC lib/thread/iobuf.o 00:04:33.988 CC lib/sock/sock.o 00:04:33.988 CC lib/sock/sock_rpc.o 00:04:34.248 LIB libspdk_sock.a 00:04:34.248 SO libspdk_sock.so.10.0 00:04:34.508 SYMLINK libspdk_sock.so 00:04:34.768 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:34.768 CC lib/nvme/nvme_ctrlr.o 00:04:34.768 CC lib/nvme/nvme_ns_cmd.o 00:04:34.768 CC lib/nvme/nvme_fabric.o 00:04:34.768 CC lib/nvme/nvme_ns.o 00:04:34.768 CC lib/nvme/nvme_pcie_common.o 00:04:34.768 CC lib/nvme/nvme_pcie.o 00:04:34.768 CC lib/nvme/nvme.o 00:04:34.768 CC 
lib/nvme/nvme_qpair.o 00:04:35.709 CC lib/nvme/nvme_quirks.o 00:04:35.709 CC lib/nvme/nvme_transport.o 00:04:35.709 CC lib/nvme/nvme_discovery.o 00:04:35.709 LIB libspdk_thread.a 00:04:35.709 SO libspdk_thread.so.11.0 00:04:35.709 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:35.709 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:35.971 SYMLINK libspdk_thread.so 00:04:35.971 CC lib/nvme/nvme_tcp.o 00:04:35.971 CC lib/nvme/nvme_opal.o 00:04:35.971 CC lib/accel/accel.o 00:04:36.230 CC lib/nvme/nvme_io_msg.o 00:04:36.230 CC lib/nvme/nvme_poll_group.o 00:04:36.230 CC lib/accel/accel_rpc.o 00:04:36.489 CC lib/nvme/nvme_zns.o 00:04:36.489 CC lib/blob/blobstore.o 00:04:36.489 CC lib/nvme/nvme_stubs.o 00:04:36.489 CC lib/nvme/nvme_auth.o 00:04:36.489 CC lib/nvme/nvme_cuse.o 00:04:36.748 CC lib/blob/request.o 00:04:36.748 CC lib/blob/zeroes.o 00:04:37.006 CC lib/blob/blob_bs_dev.o 00:04:37.006 CC lib/nvme/nvme_rdma.o 00:04:37.006 CC lib/accel/accel_sw.o 00:04:37.266 CC lib/init/json_config.o 00:04:37.525 CC lib/virtio/virtio.o 00:04:37.525 LIB libspdk_accel.a 00:04:37.525 CC lib/fsdev/fsdev.o 00:04:37.525 CC lib/virtio/virtio_vhost_user.o 00:04:37.525 CC lib/init/subsystem.o 00:04:37.525 SO libspdk_accel.so.16.0 00:04:37.525 CC lib/virtio/virtio_vfio_user.o 00:04:37.525 CC lib/virtio/virtio_pci.o 00:04:37.784 CC lib/init/subsystem_rpc.o 00:04:37.784 SYMLINK libspdk_accel.so 00:04:37.784 CC lib/fsdev/fsdev_io.o 00:04:37.784 CC lib/fsdev/fsdev_rpc.o 00:04:37.784 CC lib/init/rpc.o 00:04:38.044 LIB libspdk_virtio.a 00:04:38.044 LIB libspdk_init.a 00:04:38.044 SO libspdk_virtio.so.7.0 00:04:38.044 CC lib/bdev/bdev.o 00:04:38.044 CC lib/bdev/bdev_rpc.o 00:04:38.044 CC lib/bdev/bdev_zone.o 00:04:38.044 CC lib/bdev/part.o 00:04:38.044 SO libspdk_init.so.6.0 00:04:38.044 SYMLINK libspdk_virtio.so 00:04:38.044 CC lib/bdev/scsi_nvme.o 00:04:38.305 SYMLINK libspdk_init.so 00:04:38.305 LIB libspdk_fsdev.a 00:04:38.305 CC lib/event/app.o 00:04:38.305 CC lib/event/reactor.o 00:04:38.305 CC 
lib/event/app_rpc.o 00:04:38.305 CC lib/event/log_rpc.o 00:04:38.305 SO libspdk_fsdev.so.2.0 00:04:38.565 CC lib/event/scheduler_static.o 00:04:38.565 SYMLINK libspdk_fsdev.so 00:04:38.825 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:38.825 LIB libspdk_nvme.a 00:04:38.825 SO libspdk_nvme.so.15.0 00:04:39.085 LIB libspdk_event.a 00:04:39.085 SO libspdk_event.so.14.0 00:04:39.085 SYMLINK libspdk_event.so 00:04:39.344 SYMLINK libspdk_nvme.so 00:04:39.344 LIB libspdk_fuse_dispatcher.a 00:04:39.601 SO libspdk_fuse_dispatcher.so.1.0 00:04:39.601 SYMLINK libspdk_fuse_dispatcher.so 00:04:40.170 LIB libspdk_blob.a 00:04:40.170 SO libspdk_blob.so.11.0 00:04:40.429 SYMLINK libspdk_blob.so 00:04:40.689 CC lib/lvol/lvol.o 00:04:40.689 CC lib/blobfs/tree.o 00:04:40.689 CC lib/blobfs/blobfs.o 00:04:41.259 LIB libspdk_bdev.a 00:04:41.259 SO libspdk_bdev.so.17.0 00:04:41.519 SYMLINK libspdk_bdev.so 00:04:41.779 CC lib/nbd/nbd.o 00:04:41.779 CC lib/nbd/nbd_rpc.o 00:04:41.779 CC lib/ublk/ublk_rpc.o 00:04:41.779 CC lib/ublk/ublk.o 00:04:41.779 CC lib/ftl/ftl_core.o 00:04:41.779 CC lib/ftl/ftl_init.o 00:04:41.779 CC lib/scsi/dev.o 00:04:41.779 CC lib/nvmf/ctrlr.o 00:04:41.779 LIB libspdk_blobfs.a 00:04:41.779 SO libspdk_blobfs.so.10.0 00:04:41.779 LIB libspdk_lvol.a 00:04:41.779 SYMLINK libspdk_blobfs.so 00:04:41.779 CC lib/ftl/ftl_layout.o 00:04:42.040 SO libspdk_lvol.so.10.0 00:04:42.040 CC lib/scsi/lun.o 00:04:42.040 CC lib/ftl/ftl_debug.o 00:04:42.040 SYMLINK libspdk_lvol.so 00:04:42.040 CC lib/ftl/ftl_io.o 00:04:42.040 CC lib/nvmf/ctrlr_discovery.o 00:04:42.040 CC lib/scsi/port.o 00:04:42.301 CC lib/nvmf/ctrlr_bdev.o 00:04:42.301 CC lib/nvmf/subsystem.o 00:04:42.301 CC lib/nvmf/nvmf.o 00:04:42.301 LIB libspdk_nbd.a 00:04:42.301 CC lib/scsi/scsi.o 00:04:42.301 SO libspdk_nbd.so.7.0 00:04:42.301 CC lib/scsi/scsi_bdev.o 00:04:42.301 CC lib/ftl/ftl_sb.o 00:04:42.301 SYMLINK libspdk_nbd.so 00:04:42.301 CC lib/ftl/ftl_l2p.o 00:04:42.301 CC lib/ftl/ftl_l2p_flat.o 00:04:42.301 LIB 
libspdk_ublk.a 00:04:42.561 SO libspdk_ublk.so.3.0 00:04:42.561 CC lib/scsi/scsi_pr.o 00:04:42.561 SYMLINK libspdk_ublk.so 00:04:42.561 CC lib/scsi/scsi_rpc.o 00:04:42.561 CC lib/scsi/task.o 00:04:42.561 CC lib/ftl/ftl_nv_cache.o 00:04:42.561 CC lib/ftl/ftl_band.o 00:04:42.561 CC lib/ftl/ftl_band_ops.o 00:04:42.821 CC lib/ftl/ftl_writer.o 00:04:42.821 CC lib/ftl/ftl_rq.o 00:04:42.821 LIB libspdk_scsi.a 00:04:43.080 CC lib/ftl/ftl_reloc.o 00:04:43.080 SO libspdk_scsi.so.9.0 00:04:43.080 CC lib/nvmf/nvmf_rpc.o 00:04:43.080 CC lib/nvmf/transport.o 00:04:43.080 CC lib/nvmf/tcp.o 00:04:43.080 CC lib/nvmf/stubs.o 00:04:43.080 SYMLINK libspdk_scsi.so 00:04:43.080 CC lib/nvmf/mdns_server.o 00:04:43.339 CC lib/nvmf/rdma.o 00:04:43.598 CC lib/nvmf/auth.o 00:04:43.598 CC lib/ftl/ftl_l2p_cache.o 00:04:43.858 CC lib/iscsi/conn.o 00:04:43.858 CC lib/iscsi/init_grp.o 00:04:43.858 CC lib/iscsi/iscsi.o 00:04:44.116 CC lib/iscsi/param.o 00:04:44.116 CC lib/iscsi/portal_grp.o 00:04:44.116 CC lib/iscsi/tgt_node.o 00:04:44.116 CC lib/vhost/vhost.o 00:04:44.116 CC lib/ftl/ftl_p2l.o 00:04:44.374 CC lib/iscsi/iscsi_subsystem.o 00:04:44.374 CC lib/iscsi/iscsi_rpc.o 00:04:44.374 CC lib/iscsi/task.o 00:04:44.632 CC lib/ftl/ftl_p2l_log.o 00:04:44.632 CC lib/vhost/vhost_rpc.o 00:04:44.632 CC lib/vhost/vhost_scsi.o 00:04:44.632 CC lib/vhost/vhost_blk.o 00:04:44.891 CC lib/vhost/rte_vhost_user.o 00:04:44.891 CC lib/ftl/mngt/ftl_mngt.o 00:04:44.891 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:45.149 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:45.149 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:45.149 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:45.149 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:45.149 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:45.412 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:45.412 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:45.412 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:45.412 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:45.688 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:45.688 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:45.688 LIB 
libspdk_iscsi.a 00:04:45.688 CC lib/ftl/utils/ftl_conf.o 00:04:45.688 SO libspdk_iscsi.so.8.0 00:04:45.688 CC lib/ftl/utils/ftl_md.o 00:04:45.688 CC lib/ftl/utils/ftl_mempool.o 00:04:45.688 CC lib/ftl/utils/ftl_bitmap.o 00:04:45.948 CC lib/ftl/utils/ftl_property.o 00:04:45.948 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:45.948 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:45.948 LIB libspdk_vhost.a 00:04:45.948 SYMLINK libspdk_iscsi.so 00:04:45.948 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:45.948 SO libspdk_vhost.so.8.0 00:04:45.948 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:45.948 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:45.948 SYMLINK libspdk_vhost.so 00:04:45.948 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:45.948 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:45.948 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:46.207 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:46.207 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:46.207 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:46.207 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:46.207 LIB libspdk_nvmf.a 00:04:46.208 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:46.208 CC lib/ftl/base/ftl_base_dev.o 00:04:46.208 CC lib/ftl/base/ftl_base_bdev.o 00:04:46.208 CC lib/ftl/ftl_trace.o 00:04:46.467 SO libspdk_nvmf.so.20.0 00:04:46.467 LIB libspdk_ftl.a 00:04:46.727 SYMLINK libspdk_nvmf.so 00:04:46.727 SO libspdk_ftl.so.9.0 00:04:46.987 SYMLINK libspdk_ftl.so 00:04:47.557 CC module/env_dpdk/env_dpdk_rpc.o 00:04:47.557 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:47.557 CC module/keyring/file/keyring.o 00:04:47.557 CC module/fsdev/aio/fsdev_aio.o 00:04:47.557 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:47.557 CC module/sock/posix/posix.o 00:04:47.557 CC module/accel/error/accel_error.o 00:04:47.557 CC module/scheduler/gscheduler/gscheduler.o 00:04:47.557 CC module/keyring/linux/keyring.o 00:04:47.557 CC module/blob/bdev/blob_bdev.o 00:04:47.557 LIB libspdk_env_dpdk_rpc.a 00:04:47.557 SO libspdk_env_dpdk_rpc.so.6.0 00:04:47.817 CC 
module/keyring/linux/keyring_rpc.o 00:04:47.817 CC module/keyring/file/keyring_rpc.o 00:04:47.817 SYMLINK libspdk_env_dpdk_rpc.so 00:04:47.817 LIB libspdk_scheduler_dpdk_governor.a 00:04:47.817 LIB libspdk_scheduler_gscheduler.a 00:04:47.817 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:47.817 SO libspdk_scheduler_gscheduler.so.4.0 00:04:47.817 LIB libspdk_scheduler_dynamic.a 00:04:47.817 CC module/accel/error/accel_error_rpc.o 00:04:47.817 SO libspdk_scheduler_dynamic.so.4.0 00:04:47.817 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:47.817 LIB libspdk_keyring_linux.a 00:04:47.817 SYMLINK libspdk_scheduler_gscheduler.so 00:04:47.817 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:47.817 SYMLINK libspdk_scheduler_dynamic.so 00:04:47.817 SO libspdk_keyring_linux.so.1.0 00:04:47.817 LIB libspdk_keyring_file.a 00:04:47.817 CC module/accel/ioat/accel_ioat.o 00:04:47.817 CC module/fsdev/aio/linux_aio_mgr.o 00:04:47.817 SO libspdk_keyring_file.so.2.0 00:04:47.817 LIB libspdk_blob_bdev.a 00:04:48.077 SYMLINK libspdk_keyring_linux.so 00:04:48.077 SO libspdk_blob_bdev.so.11.0 00:04:48.077 SYMLINK libspdk_keyring_file.so 00:04:48.077 LIB libspdk_accel_error.a 00:04:48.077 CC module/accel/ioat/accel_ioat_rpc.o 00:04:48.077 SYMLINK libspdk_blob_bdev.so 00:04:48.077 CC module/accel/dsa/accel_dsa.o 00:04:48.077 SO libspdk_accel_error.so.2.0 00:04:48.077 CC module/accel/dsa/accel_dsa_rpc.o 00:04:48.077 SYMLINK libspdk_accel_error.so 00:04:48.077 LIB libspdk_accel_ioat.a 00:04:48.077 CC module/accel/iaa/accel_iaa.o 00:04:48.077 SO libspdk_accel_ioat.so.6.0 00:04:48.337 CC module/accel/iaa/accel_iaa_rpc.o 00:04:48.337 SYMLINK libspdk_accel_ioat.so 00:04:48.337 CC module/bdev/delay/vbdev_delay.o 00:04:48.337 CC module/bdev/error/vbdev_error.o 00:04:48.337 LIB libspdk_fsdev_aio.a 00:04:48.337 LIB libspdk_accel_dsa.a 00:04:48.337 CC module/bdev/gpt/gpt.o 00:04:48.337 SO libspdk_fsdev_aio.so.1.0 00:04:48.337 SO libspdk_accel_dsa.so.5.0 00:04:48.337 CC module/blobfs/bdev/blobfs_bdev.o 
00:04:48.337 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:48.337 LIB libspdk_accel_iaa.a 00:04:48.337 CC module/bdev/lvol/vbdev_lvol.o 00:04:48.337 SO libspdk_accel_iaa.so.3.0 00:04:48.597 SYMLINK libspdk_fsdev_aio.so 00:04:48.597 SYMLINK libspdk_accel_dsa.so 00:04:48.597 CC module/bdev/gpt/vbdev_gpt.o 00:04:48.597 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:48.597 LIB libspdk_sock_posix.a 00:04:48.597 SYMLINK libspdk_accel_iaa.so 00:04:48.597 CC module/bdev/error/vbdev_error_rpc.o 00:04:48.597 SO libspdk_sock_posix.so.6.0 00:04:48.597 LIB libspdk_blobfs_bdev.a 00:04:48.597 SYMLINK libspdk_sock_posix.so 00:04:48.597 SO libspdk_blobfs_bdev.so.6.0 00:04:48.597 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:48.597 LIB libspdk_bdev_error.a 00:04:48.856 SYMLINK libspdk_blobfs_bdev.so 00:04:48.856 SO libspdk_bdev_error.so.6.0 00:04:48.856 CC module/bdev/malloc/bdev_malloc.o 00:04:48.856 LIB libspdk_bdev_delay.a 00:04:48.856 CC module/bdev/null/bdev_null.o 00:04:48.856 CC module/bdev/nvme/bdev_nvme.o 00:04:48.856 SO libspdk_bdev_delay.so.6.0 00:04:48.856 LIB libspdk_bdev_gpt.a 00:04:48.856 CC module/bdev/passthru/vbdev_passthru.o 00:04:48.856 SYMLINK libspdk_bdev_error.so 00:04:48.856 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:48.856 SO libspdk_bdev_gpt.so.6.0 00:04:48.856 SYMLINK libspdk_bdev_delay.so 00:04:48.856 CC module/bdev/raid/bdev_raid.o 00:04:48.856 SYMLINK libspdk_bdev_gpt.so 00:04:49.115 CC module/bdev/split/vbdev_split.o 00:04:49.115 CC module/bdev/null/bdev_null_rpc.o 00:04:49.115 LIB libspdk_bdev_lvol.a 00:04:49.115 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:49.115 SO libspdk_bdev_lvol.so.6.0 00:04:49.115 CC module/bdev/aio/bdev_aio.o 00:04:49.115 LIB libspdk_bdev_passthru.a 00:04:49.115 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:49.115 SO libspdk_bdev_passthru.so.6.0 00:04:49.115 CC module/bdev/ftl/bdev_ftl.o 00:04:49.115 SYMLINK libspdk_bdev_lvol.so 00:04:49.115 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:49.115 SYMLINK 
libspdk_bdev_passthru.so 00:04:49.115 LIB libspdk_bdev_null.a 00:04:49.115 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:49.383 SO libspdk_bdev_null.so.6.0 00:04:49.383 CC module/bdev/split/vbdev_split_rpc.o 00:04:49.383 LIB libspdk_bdev_malloc.a 00:04:49.383 SYMLINK libspdk_bdev_null.so 00:04:49.383 SO libspdk_bdev_malloc.so.6.0 00:04:49.383 LIB libspdk_bdev_zone_block.a 00:04:49.383 CC module/bdev/raid/bdev_raid_rpc.o 00:04:49.383 SO libspdk_bdev_zone_block.so.6.0 00:04:49.383 SYMLINK libspdk_bdev_malloc.so 00:04:49.383 CC module/bdev/aio/bdev_aio_rpc.o 00:04:49.383 LIB libspdk_bdev_split.a 00:04:49.383 LIB libspdk_bdev_ftl.a 00:04:49.383 CC module/bdev/raid/bdev_raid_sb.o 00:04:49.383 SO libspdk_bdev_split.so.6.0 00:04:49.383 SYMLINK libspdk_bdev_zone_block.so 00:04:49.643 CC module/bdev/raid/raid0.o 00:04:49.643 SO libspdk_bdev_ftl.so.6.0 00:04:49.643 CC module/bdev/iscsi/bdev_iscsi.o 00:04:49.643 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:49.643 SYMLINK libspdk_bdev_split.so 00:04:49.643 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:49.643 SYMLINK libspdk_bdev_ftl.so 00:04:49.643 LIB libspdk_bdev_aio.a 00:04:49.643 CC module/bdev/nvme/nvme_rpc.o 00:04:49.643 CC module/bdev/nvme/bdev_mdns_client.o 00:04:49.643 SO libspdk_bdev_aio.so.6.0 00:04:49.643 SYMLINK libspdk_bdev_aio.so 00:04:49.643 CC module/bdev/nvme/vbdev_opal.o 00:04:49.643 CC module/bdev/raid/raid1.o 00:04:49.902 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:49.902 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:49.902 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:49.902 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:49.902 CC module/bdev/raid/concat.o 00:04:50.160 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:50.160 CC module/bdev/raid/raid5f.o 00:04:50.160 LIB libspdk_bdev_iscsi.a 00:04:50.160 SO libspdk_bdev_iscsi.so.6.0 00:04:50.160 SYMLINK libspdk_bdev_iscsi.so 00:04:50.160 LIB libspdk_bdev_virtio.a 00:04:50.160 SO libspdk_bdev_virtio.so.6.0 00:04:50.418 SYMLINK libspdk_bdev_virtio.so 
00:04:50.676 LIB libspdk_bdev_raid.a 00:04:50.676 SO libspdk_bdev_raid.so.6.0 00:04:50.934 SYMLINK libspdk_bdev_raid.so 00:04:51.930 LIB libspdk_bdev_nvme.a 00:04:51.930 SO libspdk_bdev_nvme.so.7.1 00:04:51.930 SYMLINK libspdk_bdev_nvme.so 00:04:52.497 CC module/event/subsystems/iobuf/iobuf.o 00:04:52.497 CC module/event/subsystems/keyring/keyring.o 00:04:52.497 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:52.497 CC module/event/subsystems/scheduler/scheduler.o 00:04:52.497 CC module/event/subsystems/vmd/vmd.o 00:04:52.497 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:52.497 CC module/event/subsystems/sock/sock.o 00:04:52.497 CC module/event/subsystems/fsdev/fsdev.o 00:04:52.497 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:52.756 LIB libspdk_event_scheduler.a 00:04:52.756 LIB libspdk_event_keyring.a 00:04:52.756 LIB libspdk_event_vmd.a 00:04:52.756 LIB libspdk_event_fsdev.a 00:04:52.756 LIB libspdk_event_sock.a 00:04:52.756 LIB libspdk_event_vhost_blk.a 00:04:52.756 LIB libspdk_event_iobuf.a 00:04:52.756 SO libspdk_event_scheduler.so.4.0 00:04:52.756 SO libspdk_event_keyring.so.1.0 00:04:52.756 SO libspdk_event_fsdev.so.1.0 00:04:52.756 SO libspdk_event_sock.so.5.0 00:04:52.756 SO libspdk_event_vmd.so.6.0 00:04:52.756 SO libspdk_event_vhost_blk.so.3.0 00:04:52.756 SO libspdk_event_iobuf.so.3.0 00:04:52.756 SYMLINK libspdk_event_scheduler.so 00:04:52.756 SYMLINK libspdk_event_keyring.so 00:04:52.756 SYMLINK libspdk_event_vhost_blk.so 00:04:52.756 SYMLINK libspdk_event_fsdev.so 00:04:52.756 SYMLINK libspdk_event_vmd.so 00:04:52.756 SYMLINK libspdk_event_sock.so 00:04:52.756 SYMLINK libspdk_event_iobuf.so 00:04:53.325 CC module/event/subsystems/accel/accel.o 00:04:53.325 LIB libspdk_event_accel.a 00:04:53.325 SO libspdk_event_accel.so.6.0 00:04:53.584 SYMLINK libspdk_event_accel.so 00:04:53.844 CC module/event/subsystems/bdev/bdev.o 00:04:54.103 LIB libspdk_event_bdev.a 00:04:54.103 SO libspdk_event_bdev.so.6.0 00:04:54.103 SYMLINK 
libspdk_event_bdev.so 00:04:54.362 CC module/event/subsystems/scsi/scsi.o 00:04:54.362 CC module/event/subsystems/ublk/ublk.o 00:04:54.362 CC module/event/subsystems/nbd/nbd.o 00:04:54.362 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:54.362 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:54.621 LIB libspdk_event_ublk.a 00:04:54.621 LIB libspdk_event_scsi.a 00:04:54.621 SO libspdk_event_ublk.so.3.0 00:04:54.621 SO libspdk_event_scsi.so.6.0 00:04:54.621 LIB libspdk_event_nbd.a 00:04:54.621 SO libspdk_event_nbd.so.6.0 00:04:54.621 SYMLINK libspdk_event_scsi.so 00:04:54.621 SYMLINK libspdk_event_ublk.so 00:04:54.621 SYMLINK libspdk_event_nbd.so 00:04:54.621 LIB libspdk_event_nvmf.a 00:04:54.621 SO libspdk_event_nvmf.so.6.0 00:04:54.880 SYMLINK libspdk_event_nvmf.so 00:04:54.880 CC module/event/subsystems/iscsi/iscsi.o 00:04:54.880 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:55.140 LIB libspdk_event_vhost_scsi.a 00:04:55.140 LIB libspdk_event_iscsi.a 00:04:55.140 SO libspdk_event_vhost_scsi.so.3.0 00:04:55.140 SO libspdk_event_iscsi.so.6.0 00:04:55.140 SYMLINK libspdk_event_vhost_scsi.so 00:04:55.140 SYMLINK libspdk_event_iscsi.so 00:04:55.400 SO libspdk.so.6.0 00:04:55.401 SYMLINK libspdk.so 00:04:55.661 CC test/rpc_client/rpc_client_test.o 00:04:55.661 CXX app/trace/trace.o 00:04:55.661 TEST_HEADER include/spdk/accel.h 00:04:55.661 TEST_HEADER include/spdk/accel_module.h 00:04:55.661 TEST_HEADER include/spdk/assert.h 00:04:55.661 TEST_HEADER include/spdk/barrier.h 00:04:55.661 TEST_HEADER include/spdk/base64.h 00:04:55.661 TEST_HEADER include/spdk/bdev.h 00:04:55.661 TEST_HEADER include/spdk/bdev_module.h 00:04:55.661 TEST_HEADER include/spdk/bdev_zone.h 00:04:55.661 TEST_HEADER include/spdk/bit_array.h 00:04:55.661 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:55.662 TEST_HEADER include/spdk/bit_pool.h 00:04:55.662 TEST_HEADER include/spdk/blob_bdev.h 00:04:55.662 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:55.662 TEST_HEADER 
include/spdk/blobfs.h 00:04:55.921 TEST_HEADER include/spdk/blob.h 00:04:55.921 TEST_HEADER include/spdk/conf.h 00:04:55.921 TEST_HEADER include/spdk/config.h 00:04:55.921 TEST_HEADER include/spdk/cpuset.h 00:04:55.921 TEST_HEADER include/spdk/crc16.h 00:04:55.921 TEST_HEADER include/spdk/crc32.h 00:04:55.921 TEST_HEADER include/spdk/crc64.h 00:04:55.921 TEST_HEADER include/spdk/dif.h 00:04:55.921 TEST_HEADER include/spdk/dma.h 00:04:55.922 TEST_HEADER include/spdk/endian.h 00:04:55.922 TEST_HEADER include/spdk/env_dpdk.h 00:04:55.922 TEST_HEADER include/spdk/env.h 00:04:55.922 TEST_HEADER include/spdk/event.h 00:04:55.922 TEST_HEADER include/spdk/fd_group.h 00:04:55.922 TEST_HEADER include/spdk/fd.h 00:04:55.922 TEST_HEADER include/spdk/file.h 00:04:55.922 TEST_HEADER include/spdk/fsdev.h 00:04:55.922 TEST_HEADER include/spdk/fsdev_module.h 00:04:55.922 TEST_HEADER include/spdk/ftl.h 00:04:55.922 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:55.922 CC examples/ioat/perf/perf.o 00:04:55.922 CC test/thread/poller_perf/poller_perf.o 00:04:55.922 TEST_HEADER include/spdk/gpt_spec.h 00:04:55.922 CC examples/util/zipf/zipf.o 00:04:55.922 TEST_HEADER include/spdk/hexlify.h 00:04:55.922 TEST_HEADER include/spdk/histogram_data.h 00:04:55.922 TEST_HEADER include/spdk/idxd.h 00:04:55.922 TEST_HEADER include/spdk/idxd_spec.h 00:04:55.922 TEST_HEADER include/spdk/init.h 00:04:55.922 TEST_HEADER include/spdk/ioat.h 00:04:55.922 TEST_HEADER include/spdk/ioat_spec.h 00:04:55.922 TEST_HEADER include/spdk/iscsi_spec.h 00:04:55.922 TEST_HEADER include/spdk/json.h 00:04:55.922 TEST_HEADER include/spdk/jsonrpc.h 00:04:55.922 TEST_HEADER include/spdk/keyring.h 00:04:55.922 TEST_HEADER include/spdk/keyring_module.h 00:04:55.922 TEST_HEADER include/spdk/likely.h 00:04:55.922 TEST_HEADER include/spdk/log.h 00:04:55.922 TEST_HEADER include/spdk/lvol.h 00:04:55.922 TEST_HEADER include/spdk/md5.h 00:04:55.922 TEST_HEADER include/spdk/memory.h 00:04:55.922 TEST_HEADER include/spdk/mmio.h 
00:04:55.922 TEST_HEADER include/spdk/nbd.h 00:04:55.922 TEST_HEADER include/spdk/net.h 00:04:55.922 CC test/app/bdev_svc/bdev_svc.o 00:04:55.922 TEST_HEADER include/spdk/notify.h 00:04:55.922 CC test/dma/test_dma/test_dma.o 00:04:55.922 TEST_HEADER include/spdk/nvme.h 00:04:55.922 TEST_HEADER include/spdk/nvme_intel.h 00:04:55.922 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:55.922 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:55.922 TEST_HEADER include/spdk/nvme_spec.h 00:04:55.922 TEST_HEADER include/spdk/nvme_zns.h 00:04:55.922 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:55.922 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:55.922 CC test/env/mem_callbacks/mem_callbacks.o 00:04:55.922 TEST_HEADER include/spdk/nvmf.h 00:04:55.922 TEST_HEADER include/spdk/nvmf_spec.h 00:04:55.922 TEST_HEADER include/spdk/nvmf_transport.h 00:04:55.922 TEST_HEADER include/spdk/opal.h 00:04:55.922 TEST_HEADER include/spdk/opal_spec.h 00:04:55.922 TEST_HEADER include/spdk/pci_ids.h 00:04:55.922 TEST_HEADER include/spdk/pipe.h 00:04:55.922 TEST_HEADER include/spdk/queue.h 00:04:55.922 TEST_HEADER include/spdk/reduce.h 00:04:55.922 TEST_HEADER include/spdk/rpc.h 00:04:55.922 TEST_HEADER include/spdk/scheduler.h 00:04:55.922 TEST_HEADER include/spdk/scsi.h 00:04:55.922 TEST_HEADER include/spdk/scsi_spec.h 00:04:55.922 TEST_HEADER include/spdk/sock.h 00:04:55.922 TEST_HEADER include/spdk/stdinc.h 00:04:55.922 TEST_HEADER include/spdk/string.h 00:04:55.922 TEST_HEADER include/spdk/thread.h 00:04:55.922 TEST_HEADER include/spdk/trace.h 00:04:55.922 TEST_HEADER include/spdk/trace_parser.h 00:04:55.922 TEST_HEADER include/spdk/tree.h 00:04:55.922 TEST_HEADER include/spdk/ublk.h 00:04:55.922 TEST_HEADER include/spdk/util.h 00:04:55.922 TEST_HEADER include/spdk/uuid.h 00:04:55.922 TEST_HEADER include/spdk/version.h 00:04:55.922 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:55.922 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:55.922 TEST_HEADER include/spdk/vhost.h 00:04:55.922 TEST_HEADER 
include/spdk/vmd.h 00:04:55.922 LINK rpc_client_test 00:04:55.922 TEST_HEADER include/spdk/xor.h 00:04:55.922 TEST_HEADER include/spdk/zipf.h 00:04:55.922 CXX test/cpp_headers/accel.o 00:04:55.922 LINK interrupt_tgt 00:04:55.922 LINK poller_perf 00:04:55.922 LINK zipf 00:04:56.184 LINK bdev_svc 00:04:56.184 LINK mem_callbacks 00:04:56.184 LINK ioat_perf 00:04:56.184 CXX test/cpp_headers/accel_module.o 00:04:56.184 LINK spdk_trace 00:04:56.184 CXX test/cpp_headers/assert.o 00:04:56.184 CXX test/cpp_headers/barrier.o 00:04:56.184 CXX test/cpp_headers/base64.o 00:04:56.184 CC examples/ioat/verify/verify.o 00:04:56.184 CXX test/cpp_headers/bdev.o 00:04:56.184 CC test/env/vtophys/vtophys.o 00:04:56.446 LINK test_dma 00:04:56.446 CXX test/cpp_headers/bdev_module.o 00:04:56.446 CC test/app/histogram_perf/histogram_perf.o 00:04:56.446 LINK vtophys 00:04:56.446 CC test/app/jsoncat/jsoncat.o 00:04:56.446 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:56.446 CC app/trace_record/trace_record.o 00:04:56.446 LINK verify 00:04:56.446 CC examples/thread/thread/thread_ex.o 00:04:56.446 LINK histogram_perf 00:04:56.446 LINK jsoncat 00:04:56.446 CC app/nvmf_tgt/nvmf_main.o 00:04:56.446 CXX test/cpp_headers/bdev_zone.o 00:04:56.446 CXX test/cpp_headers/bit_array.o 00:04:56.706 CXX test/cpp_headers/bit_pool.o 00:04:56.706 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:56.706 CXX test/cpp_headers/blob_bdev.o 00:04:56.706 CXX test/cpp_headers/blobfs_bdev.o 00:04:56.706 LINK nvmf_tgt 00:04:56.706 LINK spdk_trace_record 00:04:56.706 LINK thread 00:04:56.706 LINK env_dpdk_post_init 00:04:56.706 LINK nvme_fuzz 00:04:56.966 CC app/iscsi_tgt/iscsi_tgt.o 00:04:56.966 CC app/spdk_lspci/spdk_lspci.o 00:04:56.966 CC app/spdk_tgt/spdk_tgt.o 00:04:56.966 CXX test/cpp_headers/blobfs.o 00:04:56.966 CC app/spdk_nvme_perf/perf.o 00:04:56.966 CC app/spdk_nvme_identify/identify.o 00:04:56.966 CC test/env/memory/memory_ut.o 00:04:56.966 LINK spdk_lspci 00:04:56.966 CC 
app/spdk_nvme_discover/discovery_aer.o 00:04:56.966 LINK iscsi_tgt 00:04:56.966 CXX test/cpp_headers/blob.o 00:04:56.966 LINK spdk_tgt 00:04:57.226 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:57.226 CC examples/sock/hello_world/hello_sock.o 00:04:57.226 CXX test/cpp_headers/conf.o 00:04:57.226 LINK spdk_nvme_discover 00:04:57.226 CXX test/cpp_headers/config.o 00:04:57.226 CC app/spdk_top/spdk_top.o 00:04:57.226 CXX test/cpp_headers/cpuset.o 00:04:57.485 CC examples/vmd/led/led.o 00:04:57.485 CC examples/vmd/lsvmd/lsvmd.o 00:04:57.485 CXX test/cpp_headers/crc16.o 00:04:57.485 LINK hello_sock 00:04:57.485 LINK lsvmd 00:04:57.485 LINK led 00:04:57.485 CXX test/cpp_headers/crc32.o 00:04:57.746 CXX test/cpp_headers/crc64.o 00:04:57.746 CC app/vhost/vhost.o 00:04:57.746 CC app/spdk_dd/spdk_dd.o 00:04:57.746 LINK spdk_nvme_identify 00:04:57.746 LINK spdk_nvme_perf 00:04:57.746 CC examples/idxd/perf/perf.o 00:04:57.746 CXX test/cpp_headers/dif.o 00:04:57.746 LINK memory_ut 00:04:58.005 LINK vhost 00:04:58.005 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:58.005 CXX test/cpp_headers/dma.o 00:04:58.005 CXX test/cpp_headers/endian.o 00:04:58.005 CXX test/cpp_headers/env_dpdk.o 00:04:58.265 CC test/env/pci/pci_ut.o 00:04:58.265 CC examples/accel/perf/accel_perf.o 00:04:58.265 LINK spdk_dd 00:04:58.265 LINK idxd_perf 00:04:58.265 CC test/app/stub/stub.o 00:04:58.265 CXX test/cpp_headers/env.o 00:04:58.265 LINK hello_fsdev 00:04:58.265 CC app/fio/nvme/fio_plugin.o 00:04:58.265 LINK spdk_top 00:04:58.265 CXX test/cpp_headers/event.o 00:04:58.265 CXX test/cpp_headers/fd_group.o 00:04:58.265 LINK stub 00:04:58.525 CXX test/cpp_headers/fd.o 00:04:58.525 CC app/fio/bdev/fio_plugin.o 00:04:58.525 CC test/event/event_perf/event_perf.o 00:04:58.525 CXX test/cpp_headers/file.o 00:04:58.525 LINK pci_ut 00:04:58.525 CC examples/nvme/hello_world/hello_world.o 00:04:58.525 CC examples/blob/hello_world/hello_blob.o 00:04:58.785 LINK accel_perf 00:04:58.785 LINK event_perf 00:04:58.785 
CXX test/cpp_headers/fsdev.o 00:04:58.785 CC examples/nvme/reconnect/reconnect.o 00:04:58.785 CXX test/cpp_headers/fsdev_module.o 00:04:58.785 LINK hello_blob 00:04:58.785 LINK spdk_nvme 00:04:58.785 LINK hello_world 00:04:59.045 CXX test/cpp_headers/ftl.o 00:04:59.045 CC test/event/reactor/reactor.o 00:04:59.045 CC test/event/reactor_perf/reactor_perf.o 00:04:59.045 LINK iscsi_fuzz 00:04:59.045 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:59.045 CC test/nvme/aer/aer.o 00:04:59.045 LINK spdk_bdev 00:04:59.045 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:59.045 LINK reactor_perf 00:04:59.045 LINK reactor 00:04:59.045 LINK reconnect 00:04:59.045 CXX test/cpp_headers/fuse_dispatcher.o 00:04:59.045 CC examples/blob/cli/blobcli.o 00:04:59.045 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:59.306 CC examples/nvme/hotplug/hotplug.o 00:04:59.306 CC examples/nvme/arbitration/arbitration.o 00:04:59.306 CXX test/cpp_headers/gpt_spec.o 00:04:59.306 CC test/event/app_repeat/app_repeat.o 00:04:59.306 LINK aer 00:04:59.306 CC test/event/scheduler/scheduler.o 00:04:59.306 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:59.306 CXX test/cpp_headers/hexlify.o 00:04:59.567 LINK app_repeat 00:04:59.567 LINK hotplug 00:04:59.567 LINK arbitration 00:04:59.567 LINK cmb_copy 00:04:59.567 CXX test/cpp_headers/histogram_data.o 00:04:59.567 LINK scheduler 00:04:59.567 LINK vhost_fuzz 00:04:59.567 CC test/nvme/reset/reset.o 00:04:59.567 LINK nvme_manage 00:04:59.567 LINK blobcli 00:04:59.567 CXX test/cpp_headers/idxd.o 00:04:59.567 CXX test/cpp_headers/idxd_spec.o 00:04:59.567 CXX test/cpp_headers/init.o 00:04:59.826 CC examples/nvme/abort/abort.o 00:04:59.826 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:59.826 CXX test/cpp_headers/ioat.o 00:04:59.826 LINK reset 00:04:59.826 CXX test/cpp_headers/ioat_spec.o 00:04:59.826 CC test/nvme/sgl/sgl.o 00:04:59.826 CC test/accel/dif/dif.o 00:05:00.086 LINK pmr_persistence 00:05:00.086 CC test/blobfs/mkfs/mkfs.o 00:05:00.086 CC 
examples/bdev/hello_world/hello_bdev.o 00:05:00.086 CXX test/cpp_headers/iscsi_spec.o 00:05:00.086 CC test/lvol/esnap/esnap.o 00:05:00.086 CC test/nvme/e2edp/nvme_dp.o 00:05:00.086 CC examples/bdev/bdevperf/bdevperf.o 00:05:00.086 LINK mkfs 00:05:00.086 LINK abort 00:05:00.086 CXX test/cpp_headers/json.o 00:05:00.086 CC test/nvme/overhead/overhead.o 00:05:00.086 LINK sgl 00:05:00.345 LINK hello_bdev 00:05:00.345 CXX test/cpp_headers/jsonrpc.o 00:05:00.345 CXX test/cpp_headers/keyring.o 00:05:00.345 CXX test/cpp_headers/keyring_module.o 00:05:00.345 LINK nvme_dp 00:05:00.345 CC test/nvme/err_injection/err_injection.o 00:05:00.605 CC test/nvme/startup/startup.o 00:05:00.605 LINK overhead 00:05:00.605 CXX test/cpp_headers/likely.o 00:05:00.605 CC test/nvme/reserve/reserve.o 00:05:00.605 CC test/nvme/simple_copy/simple_copy.o 00:05:00.605 CC test/nvme/connect_stress/connect_stress.o 00:05:00.605 LINK err_injection 00:05:00.605 LINK startup 00:05:00.605 CXX test/cpp_headers/log.o 00:05:00.605 LINK dif 00:05:00.865 CC test/nvme/boot_partition/boot_partition.o 00:05:00.865 LINK connect_stress 00:05:00.865 LINK reserve 00:05:00.865 LINK simple_copy 00:05:00.865 CXX test/cpp_headers/lvol.o 00:05:00.865 LINK bdevperf 00:05:00.865 CC test/nvme/compliance/nvme_compliance.o 00:05:00.865 LINK boot_partition 00:05:00.865 CC test/nvme/fused_ordering/fused_ordering.o 00:05:01.126 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:01.126 CXX test/cpp_headers/md5.o 00:05:01.126 CC test/nvme/fdp/fdp.o 00:05:01.126 CC test/nvme/cuse/cuse.o 00:05:01.126 CXX test/cpp_headers/memory.o 00:05:01.126 LINK fused_ordering 00:05:01.126 LINK doorbell_aers 00:05:01.126 CC test/bdev/bdevio/bdevio.o 00:05:01.126 CC examples/nvmf/nvmf/nvmf.o 00:05:01.126 CXX test/cpp_headers/mmio.o 00:05:01.126 CXX test/cpp_headers/nbd.o 00:05:01.385 LINK nvme_compliance 00:05:01.385 CXX test/cpp_headers/net.o 00:05:01.385 CXX test/cpp_headers/notify.o 00:05:01.385 LINK fdp 00:05:01.385 CXX test/cpp_headers/nvme.o 
00:05:01.385 CXX test/cpp_headers/nvme_intel.o 00:05:01.385 CXX test/cpp_headers/nvme_ocssd.o 00:05:01.385 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:01.385 CXX test/cpp_headers/nvme_spec.o 00:05:01.645 LINK nvmf 00:05:01.645 CXX test/cpp_headers/nvme_zns.o 00:05:01.645 LINK bdevio 00:05:01.645 CXX test/cpp_headers/nvmf_cmd.o 00:05:01.645 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:01.645 CXX test/cpp_headers/nvmf.o 00:05:01.645 CXX test/cpp_headers/nvmf_spec.o 00:05:01.645 CXX test/cpp_headers/nvmf_transport.o 00:05:01.645 CXX test/cpp_headers/opal.o 00:05:01.645 CXX test/cpp_headers/opal_spec.o 00:05:01.905 CXX test/cpp_headers/pci_ids.o 00:05:01.905 CXX test/cpp_headers/pipe.o 00:05:01.905 CXX test/cpp_headers/queue.o 00:05:01.905 CXX test/cpp_headers/reduce.o 00:05:01.905 CXX test/cpp_headers/rpc.o 00:05:01.905 CXX test/cpp_headers/scheduler.o 00:05:01.905 CXX test/cpp_headers/scsi.o 00:05:01.905 CXX test/cpp_headers/scsi_spec.o 00:05:01.905 CXX test/cpp_headers/sock.o 00:05:01.905 CXX test/cpp_headers/stdinc.o 00:05:01.905 CXX test/cpp_headers/string.o 00:05:01.905 CXX test/cpp_headers/thread.o 00:05:01.905 CXX test/cpp_headers/trace.o 00:05:01.905 CXX test/cpp_headers/trace_parser.o 00:05:02.165 CXX test/cpp_headers/tree.o 00:05:02.165 CXX test/cpp_headers/ublk.o 00:05:02.165 CXX test/cpp_headers/util.o 00:05:02.165 CXX test/cpp_headers/uuid.o 00:05:02.165 CXX test/cpp_headers/version.o 00:05:02.165 CXX test/cpp_headers/vfio_user_pci.o 00:05:02.165 CXX test/cpp_headers/vfio_user_spec.o 00:05:02.165 CXX test/cpp_headers/vhost.o 00:05:02.165 CXX test/cpp_headers/vmd.o 00:05:02.165 CXX test/cpp_headers/xor.o 00:05:02.165 CXX test/cpp_headers/zipf.o 00:05:02.425 LINK cuse 00:05:05.720 LINK esnap 00:05:05.978 ************************************ 00:05:05.978 END TEST make 00:05:05.978 ************************************ 00:05:05.978 00:05:05.978 real 1m24.854s 00:05:05.978 user 6m16.320s 00:05:05.978 sys 1m17.792s 00:05:05.978 04:03:05 make -- 
common/autotest_common.sh@1130 -- $ xtrace_disable 00:05:05.978 04:03:05 make -- common/autotest_common.sh@10 -- $ set +x 00:05:05.978 04:03:05 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:05.978 04:03:05 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:05.978 04:03:05 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:05.978 04:03:05 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:05.978 04:03:05 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:05:05.978 04:03:05 -- pm/common@44 -- $ pid=6206 00:05:05.978 04:03:05 -- pm/common@50 -- $ kill -TERM 6206 00:05:05.978 04:03:05 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:05.978 04:03:05 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:05:05.978 04:03:05 -- pm/common@44 -- $ pid=6208 00:05:05.978 04:03:05 -- pm/common@50 -- $ kill -TERM 6208 00:05:05.978 04:03:05 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:05:05.978 04:03:05 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:05:06.239 04:03:06 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:06.239 04:03:06 -- common/autotest_common.sh@1693 -- # lcov --version 00:05:06.239 04:03:06 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:06.239 04:03:06 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:06.239 04:03:06 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:06.239 04:03:06 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:06.239 04:03:06 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:06.239 04:03:06 -- scripts/common.sh@336 -- # IFS=.-: 00:05:06.239 04:03:06 -- scripts/common.sh@336 -- # read -ra ver1 00:05:06.239 04:03:06 -- scripts/common.sh@337 -- # IFS=.-: 00:05:06.239 04:03:06 -- scripts/common.sh@337 -- # read -ra ver2 
00:05:06.239 04:03:06 -- scripts/common.sh@338 -- # local 'op=<' 00:05:06.239 04:03:06 -- scripts/common.sh@340 -- # ver1_l=2 00:05:06.239 04:03:06 -- scripts/common.sh@341 -- # ver2_l=1 00:05:06.239 04:03:06 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:06.239 04:03:06 -- scripts/common.sh@344 -- # case "$op" in 00:05:06.239 04:03:06 -- scripts/common.sh@345 -- # : 1 00:05:06.239 04:03:06 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:06.239 04:03:06 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:06.239 04:03:06 -- scripts/common.sh@365 -- # decimal 1 00:05:06.239 04:03:06 -- scripts/common.sh@353 -- # local d=1 00:05:06.239 04:03:06 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:06.239 04:03:06 -- scripts/common.sh@355 -- # echo 1 00:05:06.239 04:03:06 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:06.239 04:03:06 -- scripts/common.sh@366 -- # decimal 2 00:05:06.239 04:03:06 -- scripts/common.sh@353 -- # local d=2 00:05:06.239 04:03:06 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:06.239 04:03:06 -- scripts/common.sh@355 -- # echo 2 00:05:06.239 04:03:06 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:06.239 04:03:06 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:06.239 04:03:06 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:06.239 04:03:06 -- scripts/common.sh@368 -- # return 0 00:05:06.239 04:03:06 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:06.239 04:03:06 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:06.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.239 --rc genhtml_branch_coverage=1 00:05:06.239 --rc genhtml_function_coverage=1 00:05:06.239 --rc genhtml_legend=1 00:05:06.239 --rc geninfo_all_blocks=1 00:05:06.239 --rc geninfo_unexecuted_blocks=1 00:05:06.239 00:05:06.239 ' 00:05:06.239 04:03:06 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 
00:05:06.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.239 --rc genhtml_branch_coverage=1 00:05:06.239 --rc genhtml_function_coverage=1 00:05:06.239 --rc genhtml_legend=1 00:05:06.239 --rc geninfo_all_blocks=1 00:05:06.239 --rc geninfo_unexecuted_blocks=1 00:05:06.239 00:05:06.239 ' 00:05:06.239 04:03:06 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:06.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.239 --rc genhtml_branch_coverage=1 00:05:06.239 --rc genhtml_function_coverage=1 00:05:06.239 --rc genhtml_legend=1 00:05:06.239 --rc geninfo_all_blocks=1 00:05:06.239 --rc geninfo_unexecuted_blocks=1 00:05:06.239 00:05:06.239 ' 00:05:06.239 04:03:06 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:06.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.239 --rc genhtml_branch_coverage=1 00:05:06.239 --rc genhtml_function_coverage=1 00:05:06.239 --rc genhtml_legend=1 00:05:06.239 --rc geninfo_all_blocks=1 00:05:06.239 --rc geninfo_unexecuted_blocks=1 00:05:06.239 00:05:06.239 ' 00:05:06.239 04:03:06 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:06.239 04:03:06 -- nvmf/common.sh@7 -- # uname -s 00:05:06.239 04:03:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:06.239 04:03:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:06.239 04:03:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:06.239 04:03:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:06.239 04:03:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:06.239 04:03:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:06.239 04:03:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:06.239 04:03:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:06.239 04:03:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:06.239 04:03:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:06.239 04:03:06 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:155a028a-f143-454f-b8f9-8f0e571b807d 00:05:06.239 04:03:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=155a028a-f143-454f-b8f9-8f0e571b807d 00:05:06.239 04:03:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:06.239 04:03:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:06.239 04:03:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:06.239 04:03:06 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:06.239 04:03:06 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:06.239 04:03:06 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:06.239 04:03:06 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:06.239 04:03:06 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:06.239 04:03:06 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:06.239 04:03:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.239 04:03:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.239 04:03:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.239 04:03:06 -- paths/export.sh@5 -- # export PATH 00:05:06.239 04:03:06 -- paths/export.sh@6 
-- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.239 04:03:06 -- nvmf/common.sh@51 -- # : 0 00:05:06.239 04:03:06 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:06.239 04:03:06 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:06.239 04:03:06 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:06.239 04:03:06 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:06.239 04:03:06 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:06.239 04:03:06 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:06.239 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:06.239 04:03:06 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:06.239 04:03:06 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:06.239 04:03:06 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:06.239 04:03:06 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:06.500 04:03:06 -- spdk/autotest.sh@32 -- # uname -s 00:05:06.500 04:03:06 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:06.500 04:03:06 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:06.500 04:03:06 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:06.500 04:03:06 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:05:06.500 04:03:06 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:06.500 04:03:06 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:06.500 04:03:06 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:06.500 04:03:06 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:06.500 04:03:06 -- spdk/autotest.sh@48 -- # udevadm_pid=66608 
00:05:06.500 04:03:06 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:06.500 04:03:06 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:06.500 04:03:06 -- pm/common@17 -- # local monitor 00:05:06.500 04:03:06 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:06.500 04:03:06 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:06.500 04:03:06 -- pm/common@25 -- # sleep 1 00:05:06.500 04:03:06 -- pm/common@21 -- # date +%s 00:05:06.500 04:03:06 -- pm/common@21 -- # date +%s 00:05:06.500 04:03:06 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732161786 00:05:06.500 04:03:06 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732161786 00:05:06.500 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732161786_collect-vmstat.pm.log 00:05:06.500 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732161786_collect-cpu-load.pm.log 00:05:07.441 04:03:07 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:07.441 04:03:07 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:07.441 04:03:07 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:07.441 04:03:07 -- common/autotest_common.sh@10 -- # set +x 00:05:07.441 04:03:07 -- spdk/autotest.sh@59 -- # create_test_list 00:05:07.441 04:03:07 -- common/autotest_common.sh@752 -- # xtrace_disable 00:05:07.441 04:03:07 -- common/autotest_common.sh@10 -- # set +x 00:05:07.441 04:03:07 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:05:07.441 04:03:07 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:05:07.441 04:03:07 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:05:07.441 04:03:07 -- 
spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:05:07.441 04:03:07 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:05:07.441 04:03:07 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:07.441 04:03:07 -- common/autotest_common.sh@1457 -- # uname 00:05:07.441 04:03:07 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:05:07.441 04:03:07 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:07.441 04:03:07 -- common/autotest_common.sh@1477 -- # uname 00:05:07.441 04:03:07 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:05:07.441 04:03:07 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:07.441 04:03:07 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:07.701 lcov: LCOV version 1.15 00:05:07.701 04:03:07 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:22.623 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:22.623 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:37.567 04:03:36 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:37.567 04:03:36 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:37.567 04:03:36 -- common/autotest_common.sh@10 -- # set +x 00:05:37.567 04:03:36 -- spdk/autotest.sh@78 -- # rm -f 00:05:37.567 04:03:36 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:37.567 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:37.567 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:37.567 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:37.567 04:03:37 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:37.567 04:03:37 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:05:37.567 04:03:37 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:05:37.567 04:03:37 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:05:37.567 04:03:37 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:37.567 04:03:37 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:05:37.567 04:03:37 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:05:37.567 04:03:37 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:37.567 04:03:37 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:37.567 04:03:37 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:37.567 04:03:37 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n2 00:05:37.567 04:03:37 -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:05:37.567 04:03:37 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:05:37.567 04:03:37 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:37.567 04:03:37 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:37.567 04:03:37 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n3 00:05:37.567 04:03:37 -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:05:37.567 04:03:37 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:05:37.567 04:03:37 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:37.567 04:03:37 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:37.567 04:03:37 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 
00:05:37.567 04:03:37 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:05:37.567 04:03:37 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:37.567 04:03:37 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:37.567 04:03:37 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:37.567 04:03:37 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:37.567 04:03:37 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:37.567 04:03:37 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:37.567 04:03:37 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:37.567 04:03:37 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:37.567 No valid GPT data, bailing 00:05:37.567 04:03:37 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:37.567 04:03:37 -- scripts/common.sh@394 -- # pt= 00:05:37.567 04:03:37 -- scripts/common.sh@395 -- # return 1 00:05:37.567 04:03:37 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:37.567 1+0 records in 00:05:37.567 1+0 records out 00:05:37.567 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00650624 s, 161 MB/s 00:05:37.567 04:03:37 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:37.567 04:03:37 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:37.567 04:03:37 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n2 00:05:37.567 04:03:37 -- scripts/common.sh@381 -- # local block=/dev/nvme0n2 pt 00:05:37.567 04:03:37 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n2 00:05:37.567 No valid GPT data, bailing 00:05:37.567 04:03:37 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:05:37.567 04:03:37 -- scripts/common.sh@394 -- # pt= 00:05:37.567 04:03:37 -- scripts/common.sh@395 -- # return 1 00:05:37.567 04:03:37 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n2 bs=1M count=1 00:05:37.567 1+0 records in 
00:05:37.567 1+0 records out 00:05:37.567 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00409978 s, 256 MB/s 00:05:37.567 04:03:37 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:37.567 04:03:37 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:37.567 04:03:37 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n3 00:05:37.567 04:03:37 -- scripts/common.sh@381 -- # local block=/dev/nvme0n3 pt 00:05:37.567 04:03:37 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n3 00:05:37.567 No valid GPT data, bailing 00:05:37.567 04:03:37 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:05:37.827 04:03:37 -- scripts/common.sh@394 -- # pt= 00:05:37.827 04:03:37 -- scripts/common.sh@395 -- # return 1 00:05:37.827 04:03:37 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n3 bs=1M count=1 00:05:37.827 1+0 records in 00:05:37.827 1+0 records out 00:05:37.827 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00656996 s, 160 MB/s 00:05:37.827 04:03:37 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:37.827 04:03:37 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:37.827 04:03:37 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:37.828 04:03:37 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:37.828 04:03:37 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:37.828 No valid GPT data, bailing 00:05:37.828 04:03:37 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:37.828 04:03:37 -- scripts/common.sh@394 -- # pt= 00:05:37.828 04:03:37 -- scripts/common.sh@395 -- # return 1 00:05:37.828 04:03:37 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:37.828 1+0 records in 00:05:37.828 1+0 records out 00:05:37.828 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00611076 s, 172 MB/s 00:05:37.828 04:03:37 -- spdk/autotest.sh@105 -- # sync 00:05:37.828 04:03:37 -- spdk/autotest.sh@107 -- # 
xtrace_disable_per_cmd reap_spdk_processes 00:05:37.828 04:03:37 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:37.828 04:03:37 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:41.123 04:03:40 -- spdk/autotest.sh@111 -- # uname -s 00:05:41.123 04:03:40 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:41.123 04:03:40 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:41.123 04:03:40 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:41.723 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:41.723 Hugepages 00:05:41.723 node hugesize free / total 00:05:41.723 node0 1048576kB 0 / 0 00:05:41.723 node0 2048kB 0 / 0 00:05:41.723 00:05:41.723 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:41.723 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:41.723 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:05:41.998 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:05:41.998 04:03:41 -- spdk/autotest.sh@117 -- # uname -s 00:05:41.998 04:03:41 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:41.998 04:03:41 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:41.998 04:03:41 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:42.569 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:42.829 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:42.829 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:42.829 04:03:42 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:44.211 04:03:43 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:44.211 04:03:43 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:44.211 04:03:43 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:44.211 04:03:43 -- common/autotest_common.sh@1520 -- # 
get_nvme_bdfs 00:05:44.212 04:03:43 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:44.212 04:03:43 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:44.212 04:03:43 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:44.212 04:03:43 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:44.212 04:03:43 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:44.212 04:03:43 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:05:44.212 04:03:43 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:44.212 04:03:43 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:44.472 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:44.472 Waiting for block devices as requested 00:05:44.472 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:44.732 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:44.732 04:03:44 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:44.732 04:03:44 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:44.732 04:03:44 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:44.732 04:03:44 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:05:44.732 04:03:44 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:44.732 04:03:44 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:44.732 04:03:44 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:44.732 04:03:44 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:05:44.732 04:03:44 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 
00:05:44.732 04:03:44 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:05:44.732 04:03:44 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:44.732 04:03:44 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:05:44.732 04:03:44 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:44.732 04:03:44 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:44.732 04:03:44 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:44.732 04:03:44 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:44.732 04:03:44 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:44.732 04:03:44 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:44.732 04:03:44 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:44.732 04:03:44 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:44.732 04:03:44 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:44.732 04:03:44 -- common/autotest_common.sh@1543 -- # continue 00:05:44.732 04:03:44 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:44.732 04:03:44 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:44.732 04:03:44 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:44.732 04:03:44 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:05:44.732 04:03:44 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:44.732 04:03:44 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:44.732 04:03:44 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:44.732 04:03:44 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:44.732 04:03:44 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:44.732 04:03:44 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:44.732 
04:03:44 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:44.732 04:03:44 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:44.732 04:03:44 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:44.732 04:03:44 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:44.732 04:03:44 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:44.732 04:03:44 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:44.992 04:03:44 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:44.992 04:03:44 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:44.992 04:03:44 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:44.992 04:03:44 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:44.992 04:03:44 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:44.992 04:03:44 -- common/autotest_common.sh@1543 -- # continue 00:05:44.992 04:03:44 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:44.992 04:03:44 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:44.992 04:03:44 -- common/autotest_common.sh@10 -- # set +x 00:05:44.992 04:03:44 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:44.992 04:03:44 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:44.992 04:03:44 -- common/autotest_common.sh@10 -- # set +x 00:05:44.992 04:03:44 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:45.932 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:45.932 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:45.932 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:45.932 04:03:45 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:45.932 04:03:45 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:45.932 04:03:45 -- common/autotest_common.sh@10 -- # set +x 00:05:45.932 04:03:45 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:45.932 04:03:45 -- 
common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:45.932 04:03:45 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:45.932 04:03:45 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:45.932 04:03:45 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:45.932 04:03:45 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:45.932 04:03:45 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:45.932 04:03:45 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:45.932 04:03:45 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:45.932 04:03:45 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:45.932 04:03:45 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:45.932 04:03:45 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:45.932 04:03:45 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:46.192 04:03:45 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:05:46.192 04:03:45 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:46.192 04:03:45 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:46.192 04:03:45 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:46.192 04:03:45 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:46.192 04:03:45 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:46.192 04:03:45 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:46.192 04:03:45 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:46.192 04:03:45 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:46.192 04:03:45 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:46.192 04:03:45 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:05:46.192 04:03:45 -- 
common/autotest_common.sh@1572 -- # return 0 00:05:46.192 04:03:45 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:05:46.192 04:03:45 -- common/autotest_common.sh@1580 -- # return 0 00:05:46.192 04:03:45 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:46.192 04:03:45 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:46.192 04:03:45 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:46.192 04:03:45 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:46.192 04:03:45 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:46.192 04:03:45 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:46.192 04:03:45 -- common/autotest_common.sh@10 -- # set +x 00:05:46.192 04:03:45 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:46.192 04:03:45 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:46.192 04:03:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:46.192 04:03:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.192 04:03:45 -- common/autotest_common.sh@10 -- # set +x 00:05:46.192 ************************************ 00:05:46.192 START TEST env 00:05:46.192 ************************************ 00:05:46.192 04:03:45 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:46.192 * Looking for test storage... 
00:05:46.192 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:46.192 04:03:46 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:46.192 04:03:46 env -- common/autotest_common.sh@1693 -- # lcov --version 00:05:46.192 04:03:46 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:46.453 04:03:46 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:46.453 04:03:46 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:46.453 04:03:46 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:46.453 04:03:46 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:46.453 04:03:46 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:46.453 04:03:46 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:46.453 04:03:46 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:46.453 04:03:46 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:46.453 04:03:46 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:46.453 04:03:46 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:46.453 04:03:46 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:46.453 04:03:46 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:46.453 04:03:46 env -- scripts/common.sh@344 -- # case "$op" in 00:05:46.453 04:03:46 env -- scripts/common.sh@345 -- # : 1 00:05:46.453 04:03:46 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:46.453 04:03:46 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:46.453 04:03:46 env -- scripts/common.sh@365 -- # decimal 1 00:05:46.453 04:03:46 env -- scripts/common.sh@353 -- # local d=1 00:05:46.453 04:03:46 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.453 04:03:46 env -- scripts/common.sh@355 -- # echo 1 00:05:46.453 04:03:46 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:46.453 04:03:46 env -- scripts/common.sh@366 -- # decimal 2 00:05:46.453 04:03:46 env -- scripts/common.sh@353 -- # local d=2 00:05:46.453 04:03:46 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:46.453 04:03:46 env -- scripts/common.sh@355 -- # echo 2 00:05:46.453 04:03:46 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:46.453 04:03:46 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:46.453 04:03:46 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:46.453 04:03:46 env -- scripts/common.sh@368 -- # return 0 00:05:46.453 04:03:46 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:46.453 04:03:46 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:46.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.453 --rc genhtml_branch_coverage=1 00:05:46.453 --rc genhtml_function_coverage=1 00:05:46.453 --rc genhtml_legend=1 00:05:46.453 --rc geninfo_all_blocks=1 00:05:46.453 --rc geninfo_unexecuted_blocks=1 00:05:46.453 00:05:46.453 ' 00:05:46.453 04:03:46 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:46.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.453 --rc genhtml_branch_coverage=1 00:05:46.453 --rc genhtml_function_coverage=1 00:05:46.453 --rc genhtml_legend=1 00:05:46.453 --rc geninfo_all_blocks=1 00:05:46.453 --rc geninfo_unexecuted_blocks=1 00:05:46.453 00:05:46.453 ' 00:05:46.453 04:03:46 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:46.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:46.453 --rc genhtml_branch_coverage=1 00:05:46.453 --rc genhtml_function_coverage=1 00:05:46.453 --rc genhtml_legend=1 00:05:46.453 --rc geninfo_all_blocks=1 00:05:46.453 --rc geninfo_unexecuted_blocks=1 00:05:46.453 00:05:46.453 ' 00:05:46.453 04:03:46 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:46.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.453 --rc genhtml_branch_coverage=1 00:05:46.453 --rc genhtml_function_coverage=1 00:05:46.453 --rc genhtml_legend=1 00:05:46.453 --rc geninfo_all_blocks=1 00:05:46.453 --rc geninfo_unexecuted_blocks=1 00:05:46.453 00:05:46.453 ' 00:05:46.453 04:03:46 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:46.453 04:03:46 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:46.453 04:03:46 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.453 04:03:46 env -- common/autotest_common.sh@10 -- # set +x 00:05:46.453 ************************************ 00:05:46.453 START TEST env_memory 00:05:46.453 ************************************ 00:05:46.453 04:03:46 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:46.453 00:05:46.453 00:05:46.453 CUnit - A unit testing framework for C - Version 2.1-3 00:05:46.453 http://cunit.sourceforge.net/ 00:05:46.453 00:05:46.453 00:05:46.453 Suite: memory 00:05:46.453 Test: alloc and free memory map ...[2024-11-21 04:03:46.307845] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:46.453 passed 00:05:46.453 Test: mem map translation ...[2024-11-21 04:03:46.348084] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:46.453 [2024-11-21 04:03:46.348127] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:46.453 [2024-11-21 04:03:46.348185] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:46.453 [2024-11-21 04:03:46.348204] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:46.453 passed 00:05:46.453 Test: mem map registration ...[2024-11-21 04:03:46.414855] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:46.453 [2024-11-21 04:03:46.414913] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:46.713 passed 00:05:46.713 Test: mem map adjacent registrations ...passed 00:05:46.713 00:05:46.713 Run Summary: Type Total Ran Passed Failed Inactive 00:05:46.713 suites 1 1 n/a 0 0 00:05:46.713 tests 4 4 4 0 0 00:05:46.713 asserts 152 152 152 0 n/a 00:05:46.713 00:05:46.713 Elapsed time = 0.235 seconds 00:05:46.713 00:05:46.713 real 0m0.286s 00:05:46.713 user 0m0.248s 00:05:46.713 sys 0m0.028s 00:05:46.713 04:03:46 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.713 04:03:46 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:46.713 ************************************ 00:05:46.713 END TEST env_memory 00:05:46.713 ************************************ 00:05:46.714 04:03:46 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:46.714 04:03:46 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:46.714 04:03:46 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.714 04:03:46 env -- common/autotest_common.sh@10 -- # set +x 00:05:46.714 
************************************ 00:05:46.714 START TEST env_vtophys 00:05:46.714 ************************************ 00:05:46.714 04:03:46 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:46.714 EAL: lib.eal log level changed from notice to debug 00:05:46.714 EAL: Detected lcore 0 as core 0 on socket 0 00:05:46.714 EAL: Detected lcore 1 as core 0 on socket 0 00:05:46.714 EAL: Detected lcore 2 as core 0 on socket 0 00:05:46.714 EAL: Detected lcore 3 as core 0 on socket 0 00:05:46.714 EAL: Detected lcore 4 as core 0 on socket 0 00:05:46.714 EAL: Detected lcore 5 as core 0 on socket 0 00:05:46.714 EAL: Detected lcore 6 as core 0 on socket 0 00:05:46.714 EAL: Detected lcore 7 as core 0 on socket 0 00:05:46.714 EAL: Detected lcore 8 as core 0 on socket 0 00:05:46.714 EAL: Detected lcore 9 as core 0 on socket 0 00:05:46.714 EAL: Maximum logical cores by configuration: 128 00:05:46.714 EAL: Detected CPU lcores: 10 00:05:46.714 EAL: Detected NUMA nodes: 1 00:05:46.714 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:05:46.714 EAL: Detected shared linkage of DPDK 00:05:46.714 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:05:46.714 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:05:46.714 EAL: Registered [vdev] bus. 
00:05:46.714 EAL: bus.vdev log level changed from disabled to notice 00:05:46.714 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:05:46.714 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:05:46.714 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:46.714 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:46.714 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:05:46.714 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:05:46.714 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:05:46.714 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:05:46.714 EAL: No shared files mode enabled, IPC will be disabled 00:05:46.714 EAL: No shared files mode enabled, IPC is disabled 00:05:46.714 EAL: Selected IOVA mode 'PA' 00:05:46.714 EAL: Probing VFIO support... 00:05:46.714 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:46.714 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:46.714 EAL: Ask a virtual area of 0x2e000 bytes 00:05:46.714 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:46.714 EAL: Setting up physically contiguous memory... 
00:05:46.714 EAL: Setting maximum number of open files to 524288 00:05:46.714 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:46.714 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:46.714 EAL: Ask a virtual area of 0x61000 bytes 00:05:46.714 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:46.714 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:46.714 EAL: Ask a virtual area of 0x400000000 bytes 00:05:46.714 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:46.714 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:46.714 EAL: Ask a virtual area of 0x61000 bytes 00:05:46.714 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:46.714 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:46.714 EAL: Ask a virtual area of 0x400000000 bytes 00:05:46.714 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:46.714 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:46.714 EAL: Ask a virtual area of 0x61000 bytes 00:05:46.714 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:46.714 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:46.714 EAL: Ask a virtual area of 0x400000000 bytes 00:05:46.714 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:46.714 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:46.714 EAL: Ask a virtual area of 0x61000 bytes 00:05:46.714 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:46.714 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:46.714 EAL: Ask a virtual area of 0x400000000 bytes 00:05:46.714 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:46.714 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:46.714 EAL: Hugepages will be freed exactly as allocated. 
00:05:46.714 EAL: No shared files mode enabled, IPC is disabled 00:05:46.714 EAL: No shared files mode enabled, IPC is disabled 00:05:46.974 EAL: TSC frequency is ~2290000 KHz 00:05:46.974 EAL: Main lcore 0 is ready (tid=7fc77382ca40;cpuset=[0]) 00:05:46.974 EAL: Trying to obtain current memory policy. 00:05:46.974 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:46.974 EAL: Restoring previous memory policy: 0 00:05:46.974 EAL: request: mp_malloc_sync 00:05:46.974 EAL: No shared files mode enabled, IPC is disabled 00:05:46.974 EAL: Heap on socket 0 was expanded by 2MB 00:05:46.974 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:46.974 EAL: No shared files mode enabled, IPC is disabled 00:05:46.974 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:46.974 EAL: Mem event callback 'spdk:(nil)' registered 00:05:46.974 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:46.974 00:05:46.974 00:05:46.974 CUnit - A unit testing framework for C - Version 2.1-3 00:05:46.974 http://cunit.sourceforge.net/ 00:05:46.974 00:05:46.974 00:05:46.974 Suite: components_suite 00:05:47.543 Test: vtophys_malloc_test ...passed 00:05:47.543 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:47.543 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:47.543 EAL: Restoring previous memory policy: 4 00:05:47.543 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.543 EAL: request: mp_malloc_sync 00:05:47.543 EAL: No shared files mode enabled, IPC is disabled 00:05:47.543 EAL: Heap on socket 0 was expanded by 4MB 00:05:47.543 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.543 EAL: request: mp_malloc_sync 00:05:47.543 EAL: No shared files mode enabled, IPC is disabled 00:05:47.543 EAL: Heap on socket 0 was shrunk by 4MB 00:05:47.543 EAL: Trying to obtain current memory policy. 
00:05:47.543 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:47.543 EAL: Restoring previous memory policy: 4 00:05:47.543 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.543 EAL: request: mp_malloc_sync 00:05:47.543 EAL: No shared files mode enabled, IPC is disabled 00:05:47.543 EAL: Heap on socket 0 was expanded by 6MB 00:05:47.543 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.543 EAL: request: mp_malloc_sync 00:05:47.543 EAL: No shared files mode enabled, IPC is disabled 00:05:47.543 EAL: Heap on socket 0 was shrunk by 6MB 00:05:47.543 EAL: Trying to obtain current memory policy. 00:05:47.543 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:47.543 EAL: Restoring previous memory policy: 4 00:05:47.543 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.543 EAL: request: mp_malloc_sync 00:05:47.543 EAL: No shared files mode enabled, IPC is disabled 00:05:47.543 EAL: Heap on socket 0 was expanded by 10MB 00:05:47.543 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.543 EAL: request: mp_malloc_sync 00:05:47.543 EAL: No shared files mode enabled, IPC is disabled 00:05:47.543 EAL: Heap on socket 0 was shrunk by 10MB 00:05:47.543 EAL: Trying to obtain current memory policy. 00:05:47.543 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:47.543 EAL: Restoring previous memory policy: 4 00:05:47.543 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.543 EAL: request: mp_malloc_sync 00:05:47.543 EAL: No shared files mode enabled, IPC is disabled 00:05:47.543 EAL: Heap on socket 0 was expanded by 18MB 00:05:47.543 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.543 EAL: request: mp_malloc_sync 00:05:47.543 EAL: No shared files mode enabled, IPC is disabled 00:05:47.543 EAL: Heap on socket 0 was shrunk by 18MB 00:05:47.543 EAL: Trying to obtain current memory policy. 
00:05:47.543 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:47.543 EAL: Restoring previous memory policy: 4 00:05:47.543 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.543 EAL: request: mp_malloc_sync 00:05:47.543 EAL: No shared files mode enabled, IPC is disabled 00:05:47.543 EAL: Heap on socket 0 was expanded by 34MB 00:05:47.543 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.543 EAL: request: mp_malloc_sync 00:05:47.543 EAL: No shared files mode enabled, IPC is disabled 00:05:47.543 EAL: Heap on socket 0 was shrunk by 34MB 00:05:47.543 EAL: Trying to obtain current memory policy. 00:05:47.543 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:47.543 EAL: Restoring previous memory policy: 4 00:05:47.543 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.543 EAL: request: mp_malloc_sync 00:05:47.543 EAL: No shared files mode enabled, IPC is disabled 00:05:47.543 EAL: Heap on socket 0 was expanded by 66MB 00:05:47.543 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.543 EAL: request: mp_malloc_sync 00:05:47.543 EAL: No shared files mode enabled, IPC is disabled 00:05:47.543 EAL: Heap on socket 0 was shrunk by 66MB 00:05:47.543 EAL: Trying to obtain current memory policy. 00:05:47.543 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:47.543 EAL: Restoring previous memory policy: 4 00:05:47.543 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.543 EAL: request: mp_malloc_sync 00:05:47.543 EAL: No shared files mode enabled, IPC is disabled 00:05:47.543 EAL: Heap on socket 0 was expanded by 130MB 00:05:47.543 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.543 EAL: request: mp_malloc_sync 00:05:47.543 EAL: No shared files mode enabled, IPC is disabled 00:05:47.543 EAL: Heap on socket 0 was shrunk by 130MB 00:05:47.543 EAL: Trying to obtain current memory policy. 
00:05:47.543 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:47.802 EAL: Restoring previous memory policy: 4 00:05:47.802 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.802 EAL: request: mp_malloc_sync 00:05:47.802 EAL: No shared files mode enabled, IPC is disabled 00:05:47.802 EAL: Heap on socket 0 was expanded by 258MB 00:05:47.802 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.802 EAL: request: mp_malloc_sync 00:05:47.802 EAL: No shared files mode enabled, IPC is disabled 00:05:47.802 EAL: Heap on socket 0 was shrunk by 258MB 00:05:47.802 EAL: Trying to obtain current memory policy. 00:05:47.802 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:48.062 EAL: Restoring previous memory policy: 4 00:05:48.063 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.063 EAL: request: mp_malloc_sync 00:05:48.063 EAL: No shared files mode enabled, IPC is disabled 00:05:48.063 EAL: Heap on socket 0 was expanded by 514MB 00:05:48.323 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.323 EAL: request: mp_malloc_sync 00:05:48.323 EAL: No shared files mode enabled, IPC is disabled 00:05:48.323 EAL: Heap on socket 0 was shrunk by 514MB 00:05:48.323 EAL: Trying to obtain current memory policy. 
00:05:48.323 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:48.893 EAL: Restoring previous memory policy: 4 00:05:48.893 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.893 EAL: request: mp_malloc_sync 00:05:48.893 EAL: No shared files mode enabled, IPC is disabled 00:05:48.893 EAL: Heap on socket 0 was expanded by 1026MB 00:05:49.153 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.413 passed 00:05:49.413 00:05:49.413 Run Summary: Type Total Ran Passed Failed Inactive 00:05:49.413 suites 1 1 n/a 0 0 00:05:49.413 tests 2 2 2 0 0 00:05:49.413 asserts 5155 5155 5155 0 n/a 00:05:49.413 00:05:49.413 Elapsed time = 2.380 seconds 00:05:49.413 EAL: request: mp_malloc_sync 00:05:49.413 EAL: No shared files mode enabled, IPC is disabled 00:05:49.413 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:49.413 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.413 EAL: request: mp_malloc_sync 00:05:49.413 EAL: No shared files mode enabled, IPC is disabled 00:05:49.413 EAL: Heap on socket 0 was shrunk by 2MB 00:05:49.413 EAL: No shared files mode enabled, IPC is disabled 00:05:49.413 EAL: No shared files mode enabled, IPC is disabled 00:05:49.413 EAL: No shared files mode enabled, IPC is disabled 00:05:49.413 00:05:49.413 real 0m2.648s 00:05:49.413 user 0m1.381s 00:05:49.413 sys 0m1.124s 00:05:49.413 04:03:49 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.413 04:03:49 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:49.413 ************************************ 00:05:49.413 END TEST env_vtophys 00:05:49.413 ************************************ 00:05:49.413 04:03:49 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:49.413 04:03:49 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:49.413 04:03:49 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.413 04:03:49 env -- common/autotest_common.sh@10 -- # set +x 00:05:49.413 
************************************ 00:05:49.413 START TEST env_pci 00:05:49.413 ************************************ 00:05:49.413 04:03:49 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:49.413 00:05:49.413 00:05:49.413 CUnit - A unit testing framework for C - Version 2.1-3 00:05:49.413 http://cunit.sourceforge.net/ 00:05:49.413 00:05:49.413 00:05:49.413 Suite: pci 00:05:49.413 Test: pci_hook ...[2024-11-21 04:03:49.348677] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 68868 has claimed it 00:05:49.413 passed 00:05:49.413 00:05:49.413 Run Summary: Type Total Ran Passed Failed Inactive 00:05:49.413 suites 1 1 n/a 0 0 00:05:49.413 tests 1 1 1 0 0 00:05:49.413 asserts 25 25 25 0 n/a 00:05:49.413 00:05:49.413 Elapsed time = 0.008 seconds 00:05:49.413 EAL: Cannot find device (10000:00:01.0) 00:05:49.413 EAL: Failed to attach device on primary process 00:05:49.673 00:05:49.673 real 0m0.093s 00:05:49.673 user 0m0.042s 00:05:49.673 sys 0m0.051s 00:05:49.673 04:03:49 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.673 04:03:49 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:49.673 ************************************ 00:05:49.673 END TEST env_pci 00:05:49.673 ************************************ 00:05:49.673 04:03:49 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:49.673 04:03:49 env -- env/env.sh@15 -- # uname 00:05:49.673 04:03:49 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:49.673 04:03:49 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:49.673 04:03:49 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:49.673 04:03:49 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:49.673 04:03:49 env 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.673 04:03:49 env -- common/autotest_common.sh@10 -- # set +x 00:05:49.673 ************************************ 00:05:49.673 START TEST env_dpdk_post_init 00:05:49.673 ************************************ 00:05:49.673 04:03:49 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:49.673 EAL: Detected CPU lcores: 10 00:05:49.673 EAL: Detected NUMA nodes: 1 00:05:49.673 EAL: Detected shared linkage of DPDK 00:05:49.673 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:49.673 EAL: Selected IOVA mode 'PA' 00:05:49.933 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:49.933 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:49.933 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:49.933 Starting DPDK initialization... 00:05:49.933 Starting SPDK post initialization... 00:05:49.933 SPDK NVMe probe 00:05:49.933 Attaching to 0000:00:10.0 00:05:49.933 Attaching to 0000:00:11.0 00:05:49.933 Attached to 0000:00:10.0 00:05:49.933 Attached to 0000:00:11.0 00:05:49.933 Cleaning up... 
00:05:49.933 00:05:49.933 real 0m0.251s 00:05:49.933 user 0m0.074s 00:05:49.933 sys 0m0.080s 00:05:49.933 04:03:49 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.933 04:03:49 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:49.933 ************************************ 00:05:49.933 END TEST env_dpdk_post_init 00:05:49.933 ************************************ 00:05:49.933 04:03:49 env -- env/env.sh@26 -- # uname 00:05:49.933 04:03:49 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:49.933 04:03:49 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:49.933 04:03:49 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:49.933 04:03:49 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.933 04:03:49 env -- common/autotest_common.sh@10 -- # set +x 00:05:49.933 ************************************ 00:05:49.933 START TEST env_mem_callbacks 00:05:49.933 ************************************ 00:05:49.933 04:03:49 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:49.933 EAL: Detected CPU lcores: 10 00:05:49.933 EAL: Detected NUMA nodes: 1 00:05:49.933 EAL: Detected shared linkage of DPDK 00:05:49.933 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:49.933 EAL: Selected IOVA mode 'PA' 00:05:50.192 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:50.192 00:05:50.192 00:05:50.192 CUnit - A unit testing framework for C - Version 2.1-3 00:05:50.192 http://cunit.sourceforge.net/ 00:05:50.192 00:05:50.192 00:05:50.192 Suite: memory 00:05:50.192 Test: test ... 
00:05:50.192 register 0x200000200000 2097152 00:05:50.192 malloc 3145728 00:05:50.192 register 0x200000400000 4194304 00:05:50.192 buf 0x200000500000 len 3145728 PASSED 00:05:50.192 malloc 64 00:05:50.192 buf 0x2000004fff40 len 64 PASSED 00:05:50.192 malloc 4194304 00:05:50.192 register 0x200000800000 6291456 00:05:50.192 buf 0x200000a00000 len 4194304 PASSED 00:05:50.192 free 0x200000500000 3145728 00:05:50.192 free 0x2000004fff40 64 00:05:50.192 unregister 0x200000400000 4194304 PASSED 00:05:50.192 free 0x200000a00000 4194304 00:05:50.192 unregister 0x200000800000 6291456 PASSED 00:05:50.192 malloc 8388608 00:05:50.192 register 0x200000400000 10485760 00:05:50.192 buf 0x200000600000 len 8388608 PASSED 00:05:50.192 free 0x200000600000 8388608 00:05:50.192 unregister 0x200000400000 10485760 PASSED 00:05:50.192 passed 00:05:50.192 00:05:50.192 Run Summary: Type Total Ran Passed Failed Inactive 00:05:50.192 suites 1 1 n/a 0 0 00:05:50.192 tests 1 1 1 0 0 00:05:50.192 asserts 15 15 15 0 n/a 00:05:50.192 00:05:50.192 Elapsed time = 0.013 seconds 00:05:50.192 00:05:50.192 real 0m0.188s 00:05:50.192 user 0m0.029s 00:05:50.192 sys 0m0.058s 00:05:50.192 04:03:50 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:50.192 ************************************ 00:05:50.192 END TEST env_mem_callbacks 00:05:50.192 ************************************ 00:05:50.192 04:03:50 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:50.192 ************************************ 00:05:50.192 END TEST env 00:05:50.193 ************************************ 00:05:50.193 00:05:50.193 real 0m4.072s 00:05:50.193 user 0m2.003s 00:05:50.193 sys 0m1.729s 00:05:50.193 04:03:50 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:50.193 04:03:50 env -- common/autotest_common.sh@10 -- # set +x 00:05:50.193 04:03:50 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:50.193 04:03:50 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:50.193 04:03:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.193 04:03:50 -- common/autotest_common.sh@10 -- # set +x 00:05:50.193 ************************************ 00:05:50.193 START TEST rpc 00:05:50.193 ************************************ 00:05:50.193 04:03:50 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:50.452 * Looking for test storage... 00:05:50.452 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:50.452 04:03:50 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:50.452 04:03:50 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:50.452 04:03:50 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:50.452 04:03:50 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:50.452 04:03:50 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:50.452 04:03:50 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:50.452 04:03:50 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:50.452 04:03:50 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:50.452 04:03:50 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:50.452 04:03:50 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:50.452 04:03:50 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:50.452 04:03:50 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:50.452 04:03:50 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:50.452 04:03:50 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:50.452 04:03:50 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:50.452 04:03:50 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:50.452 04:03:50 rpc -- scripts/common.sh@345 -- # : 1 00:05:50.452 04:03:50 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:50.452 04:03:50 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:50.452 04:03:50 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:50.452 04:03:50 rpc -- scripts/common.sh@353 -- # local d=1 00:05:50.452 04:03:50 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:50.452 04:03:50 rpc -- scripts/common.sh@355 -- # echo 1 00:05:50.452 04:03:50 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:50.452 04:03:50 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:50.452 04:03:50 rpc -- scripts/common.sh@353 -- # local d=2 00:05:50.452 04:03:50 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:50.452 04:03:50 rpc -- scripts/common.sh@355 -- # echo 2 00:05:50.452 04:03:50 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:50.452 04:03:50 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:50.452 04:03:50 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:50.452 04:03:50 rpc -- scripts/common.sh@368 -- # return 0 00:05:50.452 04:03:50 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:50.452 04:03:50 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:50.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.452 --rc genhtml_branch_coverage=1 00:05:50.452 --rc genhtml_function_coverage=1 00:05:50.452 --rc genhtml_legend=1 00:05:50.452 --rc geninfo_all_blocks=1 00:05:50.452 --rc geninfo_unexecuted_blocks=1 00:05:50.452 00:05:50.452 ' 00:05:50.452 04:03:50 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:50.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.452 --rc genhtml_branch_coverage=1 00:05:50.452 --rc genhtml_function_coverage=1 00:05:50.452 --rc genhtml_legend=1 00:05:50.452 --rc geninfo_all_blocks=1 00:05:50.452 --rc geninfo_unexecuted_blocks=1 00:05:50.452 00:05:50.452 ' 00:05:50.452 04:03:50 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:50.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:50.452 --rc genhtml_branch_coverage=1 00:05:50.452 --rc genhtml_function_coverage=1 00:05:50.452 --rc genhtml_legend=1 00:05:50.453 --rc geninfo_all_blocks=1 00:05:50.453 --rc geninfo_unexecuted_blocks=1 00:05:50.453 00:05:50.453 ' 00:05:50.453 04:03:50 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:50.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.453 --rc genhtml_branch_coverage=1 00:05:50.453 --rc genhtml_function_coverage=1 00:05:50.453 --rc genhtml_legend=1 00:05:50.453 --rc geninfo_all_blocks=1 00:05:50.453 --rc geninfo_unexecuted_blocks=1 00:05:50.453 00:05:50.453 ' 00:05:50.453 04:03:50 rpc -- rpc/rpc.sh@65 -- # spdk_pid=68995 00:05:50.453 04:03:50 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:50.453 04:03:50 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:50.453 04:03:50 rpc -- rpc/rpc.sh@67 -- # waitforlisten 68995 00:05:50.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.453 04:03:50 rpc -- common/autotest_common.sh@835 -- # '[' -z 68995 ']' 00:05:50.453 04:03:50 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.453 04:03:50 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:50.453 04:03:50 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.453 04:03:50 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:50.453 04:03:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.712 [2024-11-21 04:03:50.464848] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:05:50.712 [2024-11-21 04:03:50.465056] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68995 ] 00:05:50.712 [2024-11-21 04:03:50.621739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.712 [2024-11-21 04:03:50.662738] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:50.712 [2024-11-21 04:03:50.662801] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 68995' to capture a snapshot of events at runtime. 00:05:50.712 [2024-11-21 04:03:50.662814] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:50.712 [2024-11-21 04:03:50.662823] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:50.712 [2024-11-21 04:03:50.662833] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid68995 for offline analysis/debug. 
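The records above show spdk_tgt coming up and the harness waiting for it to listen on the UNIX domain socket /var/tmp/spdk.sock; the rpc_cmd invocations that follow in this trace are JSON-RPC 2.0 requests sent over that socket. A minimal sketch of how such a request could be framed and sent — the method name bdev_get_bdevs is taken from the trace itself, but the helper names and the single-read reply handling are illustrative assumptions, not SPDK code:

```python
import json
import socket

def build_rpc_request(method, params=None, req_id=1):
    """Serialize a JSON-RPC 2.0 request of the shape SPDK's RPC server accepts."""
    req = {"jsonrpc": "2.0", "method": method, "id": req_id}
    if params is not None:
        req["params"] = params
    return json.dumps(req).encode()

def call_rpc(sock_path, method, params=None):
    """Hypothetical helper: send one request over the UNIX socket, read one reply.

    A real client would loop until the reply JSON parses completely; a single
    recv() is a simplification for the sketch.
    """
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(build_rpc_request(method, params))
        return json.loads(s.recv(65536))

# Framing for the call the rpc_integrity test issues via rpc_cmd:
payload = build_rpc_request("bdev_get_bdevs")
```

In practice the tests in this trace go through SPDK's scripts/rpc.py wrapper rather than raw sockets, but the wire exchange it performs is of this shape.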
00:05:50.712 [2024-11-21 04:03:50.663320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.651 04:03:51 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:51.651 04:03:51 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:51.651 04:03:51 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:51.651 04:03:51 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:51.651 04:03:51 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:51.651 04:03:51 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:51.651 04:03:51 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:51.651 04:03:51 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.651 04:03:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.651 ************************************ 00:05:51.651 START TEST rpc_integrity 00:05:51.651 ************************************ 00:05:51.651 04:03:51 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:51.651 04:03:51 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:51.651 04:03:51 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.651 04:03:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.651 04:03:51 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.651 04:03:51 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:51.651 04:03:51 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:51.651 04:03:51 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:51.651 04:03:51 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:51.651 04:03:51 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.651 04:03:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.651 04:03:51 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.651 04:03:51 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:51.651 04:03:51 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:51.651 04:03:51 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.651 04:03:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.651 04:03:51 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.651 04:03:51 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:51.651 { 00:05:51.651 "name": "Malloc0", 00:05:51.651 "aliases": [ 00:05:51.651 "1a9b5804-d314-4254-a4dc-790a13171958" 00:05:51.651 ], 00:05:51.651 "product_name": "Malloc disk", 00:05:51.651 "block_size": 512, 00:05:51.651 "num_blocks": 16384, 00:05:51.651 "uuid": "1a9b5804-d314-4254-a4dc-790a13171958", 00:05:51.651 "assigned_rate_limits": { 00:05:51.651 "rw_ios_per_sec": 0, 00:05:51.651 "rw_mbytes_per_sec": 0, 00:05:51.651 "r_mbytes_per_sec": 0, 00:05:51.651 "w_mbytes_per_sec": 0 00:05:51.651 }, 00:05:51.651 "claimed": false, 00:05:51.651 "zoned": false, 00:05:51.651 "supported_io_types": { 00:05:51.651 "read": true, 00:05:51.651 "write": true, 00:05:51.651 "unmap": true, 00:05:51.651 "flush": true, 00:05:51.651 "reset": true, 00:05:51.651 "nvme_admin": false, 00:05:51.651 "nvme_io": false, 00:05:51.651 "nvme_io_md": false, 00:05:51.651 "write_zeroes": true, 00:05:51.651 "zcopy": true, 00:05:51.651 "get_zone_info": false, 00:05:51.651 "zone_management": false, 00:05:51.651 "zone_append": false, 00:05:51.651 "compare": false, 00:05:51.651 "compare_and_write": false, 00:05:51.651 "abort": true, 00:05:51.651 "seek_hole": false, 
00:05:51.651 "seek_data": false, 00:05:51.651 "copy": true, 00:05:51.651 "nvme_iov_md": false 00:05:51.651 }, 00:05:51.651 "memory_domains": [ 00:05:51.651 { 00:05:51.651 "dma_device_id": "system", 00:05:51.651 "dma_device_type": 1 00:05:51.651 }, 00:05:51.651 { 00:05:51.651 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:51.651 "dma_device_type": 2 00:05:51.651 } 00:05:51.651 ], 00:05:51.651 "driver_specific": {} 00:05:51.651 } 00:05:51.651 ]' 00:05:51.651 04:03:51 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:51.651 04:03:51 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:51.651 04:03:51 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:51.651 04:03:51 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.651 04:03:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.651 [2024-11-21 04:03:51.462277] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:51.651 [2024-11-21 04:03:51.462342] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:51.651 [2024-11-21 04:03:51.462401] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:05:51.651 [2024-11-21 04:03:51.462413] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:51.651 [2024-11-21 04:03:51.465258] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:51.651 [2024-11-21 04:03:51.465297] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:51.651 Passthru0 00:05:51.651 04:03:51 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.651 04:03:51 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:51.651 04:03:51 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.651 04:03:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:05:51.651 04:03:51 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.651 04:03:51 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:51.651 { 00:05:51.651 "name": "Malloc0", 00:05:51.651 "aliases": [ 00:05:51.651 "1a9b5804-d314-4254-a4dc-790a13171958" 00:05:51.651 ], 00:05:51.651 "product_name": "Malloc disk", 00:05:51.651 "block_size": 512, 00:05:51.651 "num_blocks": 16384, 00:05:51.651 "uuid": "1a9b5804-d314-4254-a4dc-790a13171958", 00:05:51.651 "assigned_rate_limits": { 00:05:51.651 "rw_ios_per_sec": 0, 00:05:51.651 "rw_mbytes_per_sec": 0, 00:05:51.651 "r_mbytes_per_sec": 0, 00:05:51.651 "w_mbytes_per_sec": 0 00:05:51.651 }, 00:05:51.651 "claimed": true, 00:05:51.651 "claim_type": "exclusive_write", 00:05:51.651 "zoned": false, 00:05:51.651 "supported_io_types": { 00:05:51.651 "read": true, 00:05:51.651 "write": true, 00:05:51.651 "unmap": true, 00:05:51.651 "flush": true, 00:05:51.651 "reset": true, 00:05:51.651 "nvme_admin": false, 00:05:51.651 "nvme_io": false, 00:05:51.651 "nvme_io_md": false, 00:05:51.651 "write_zeroes": true, 00:05:51.651 "zcopy": true, 00:05:51.651 "get_zone_info": false, 00:05:51.651 "zone_management": false, 00:05:51.651 "zone_append": false, 00:05:51.651 "compare": false, 00:05:51.651 "compare_and_write": false, 00:05:51.651 "abort": true, 00:05:51.651 "seek_hole": false, 00:05:51.651 "seek_data": false, 00:05:51.651 "copy": true, 00:05:51.651 "nvme_iov_md": false 00:05:51.651 }, 00:05:51.651 "memory_domains": [ 00:05:51.651 { 00:05:51.651 "dma_device_id": "system", 00:05:51.651 "dma_device_type": 1 00:05:51.651 }, 00:05:51.651 { 00:05:51.651 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:51.651 "dma_device_type": 2 00:05:51.651 } 00:05:51.651 ], 00:05:51.651 "driver_specific": {} 00:05:51.651 }, 00:05:51.651 { 00:05:51.651 "name": "Passthru0", 00:05:51.651 "aliases": [ 00:05:51.651 "73deddc5-85a1-516d-8255-2b8d364d6876" 00:05:51.651 ], 00:05:51.651 "product_name": "passthru", 00:05:51.651 
"block_size": 512, 00:05:51.651 "num_blocks": 16384, 00:05:51.651 "uuid": "73deddc5-85a1-516d-8255-2b8d364d6876", 00:05:51.651 "assigned_rate_limits": { 00:05:51.651 "rw_ios_per_sec": 0, 00:05:51.651 "rw_mbytes_per_sec": 0, 00:05:51.651 "r_mbytes_per_sec": 0, 00:05:51.651 "w_mbytes_per_sec": 0 00:05:51.651 }, 00:05:51.651 "claimed": false, 00:05:51.651 "zoned": false, 00:05:51.651 "supported_io_types": { 00:05:51.651 "read": true, 00:05:51.651 "write": true, 00:05:51.651 "unmap": true, 00:05:51.651 "flush": true, 00:05:51.651 "reset": true, 00:05:51.651 "nvme_admin": false, 00:05:51.651 "nvme_io": false, 00:05:51.651 "nvme_io_md": false, 00:05:51.651 "write_zeroes": true, 00:05:51.651 "zcopy": true, 00:05:51.651 "get_zone_info": false, 00:05:51.651 "zone_management": false, 00:05:51.651 "zone_append": false, 00:05:51.651 "compare": false, 00:05:51.651 "compare_and_write": false, 00:05:51.651 "abort": true, 00:05:51.651 "seek_hole": false, 00:05:51.651 "seek_data": false, 00:05:51.651 "copy": true, 00:05:51.651 "nvme_iov_md": false 00:05:51.651 }, 00:05:51.651 "memory_domains": [ 00:05:51.651 { 00:05:51.651 "dma_device_id": "system", 00:05:51.651 "dma_device_type": 1 00:05:51.651 }, 00:05:51.651 { 00:05:51.651 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:51.651 "dma_device_type": 2 00:05:51.652 } 00:05:51.652 ], 00:05:51.652 "driver_specific": { 00:05:51.652 "passthru": { 00:05:51.652 "name": "Passthru0", 00:05:51.652 "base_bdev_name": "Malloc0" 00:05:51.652 } 00:05:51.652 } 00:05:51.652 } 00:05:51.652 ]' 00:05:51.652 04:03:51 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:51.652 04:03:51 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:51.652 04:03:51 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:51.652 04:03:51 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.652 04:03:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.652 04:03:51 rpc.rpc_integrity 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.652 04:03:51 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:51.652 04:03:51 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.652 04:03:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.652 04:03:51 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.652 04:03:51 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:51.652 04:03:51 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.652 04:03:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.652 04:03:51 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.652 04:03:51 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:51.652 04:03:51 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:51.652 04:03:51 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:51.652 00:05:51.652 real 0m0.300s 00:05:51.652 user 0m0.174s 00:05:51.652 sys 0m0.048s 00:05:51.652 04:03:51 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.652 04:03:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.652 ************************************ 00:05:51.652 END TEST rpc_integrity 00:05:51.652 ************************************ 00:05:51.912 04:03:51 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:51.912 04:03:51 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:51.912 04:03:51 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.912 04:03:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.912 ************************************ 00:05:51.912 START TEST rpc_plugins 00:05:51.912 ************************************ 00:05:51.912 04:03:51 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:51.912 04:03:51 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:05:51.912 04:03:51 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.912 04:03:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:51.912 04:03:51 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.912 04:03:51 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:51.912 04:03:51 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:51.912 04:03:51 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.912 04:03:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:51.912 04:03:51 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.912 04:03:51 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:51.912 { 00:05:51.912 "name": "Malloc1", 00:05:51.912 "aliases": [ 00:05:51.912 "90cc654f-e1d4-421f-9507-507d09e4bcc9" 00:05:51.912 ], 00:05:51.912 "product_name": "Malloc disk", 00:05:51.912 "block_size": 4096, 00:05:51.912 "num_blocks": 256, 00:05:51.912 "uuid": "90cc654f-e1d4-421f-9507-507d09e4bcc9", 00:05:51.912 "assigned_rate_limits": { 00:05:51.912 "rw_ios_per_sec": 0, 00:05:51.912 "rw_mbytes_per_sec": 0, 00:05:51.912 "r_mbytes_per_sec": 0, 00:05:51.912 "w_mbytes_per_sec": 0 00:05:51.912 }, 00:05:51.912 "claimed": false, 00:05:51.912 "zoned": false, 00:05:51.912 "supported_io_types": { 00:05:51.912 "read": true, 00:05:51.912 "write": true, 00:05:51.912 "unmap": true, 00:05:51.912 "flush": true, 00:05:51.912 "reset": true, 00:05:51.912 "nvme_admin": false, 00:05:51.912 "nvme_io": false, 00:05:51.912 "nvme_io_md": false, 00:05:51.912 "write_zeroes": true, 00:05:51.912 "zcopy": true, 00:05:51.912 "get_zone_info": false, 00:05:51.912 "zone_management": false, 00:05:51.912 "zone_append": false, 00:05:51.912 "compare": false, 00:05:51.912 "compare_and_write": false, 00:05:51.912 "abort": true, 00:05:51.912 "seek_hole": false, 00:05:51.912 "seek_data": false, 00:05:51.912 "copy": 
true, 00:05:51.912 "nvme_iov_md": false 00:05:51.912 }, 00:05:51.912 "memory_domains": [ 00:05:51.912 { 00:05:51.912 "dma_device_id": "system", 00:05:51.912 "dma_device_type": 1 00:05:51.912 }, 00:05:51.912 { 00:05:51.912 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:51.912 "dma_device_type": 2 00:05:51.912 } 00:05:51.912 ], 00:05:51.912 "driver_specific": {} 00:05:51.912 } 00:05:51.912 ]' 00:05:51.912 04:03:51 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:51.912 04:03:51 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:51.912 04:03:51 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:51.912 04:03:51 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.912 04:03:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:51.912 04:03:51 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.912 04:03:51 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:51.912 04:03:51 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.912 04:03:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:51.912 04:03:51 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.912 04:03:51 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:51.912 04:03:51 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:51.912 ************************************ 00:05:51.912 END TEST rpc_plugins 00:05:51.912 ************************************ 00:05:51.912 04:03:51 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:51.912 00:05:51.912 real 0m0.171s 00:05:51.912 user 0m0.095s 00:05:51.912 sys 0m0.033s 00:05:51.912 04:03:51 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.912 04:03:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:52.174 04:03:51 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:52.174 04:03:51 rpc -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:52.174 04:03:51 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:52.174 04:03:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.174 ************************************ 00:05:52.174 START TEST rpc_trace_cmd_test 00:05:52.174 ************************************ 00:05:52.174 04:03:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:05:52.174 04:03:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:52.174 04:03:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:52.174 04:03:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.174 04:03:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:52.174 04:03:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.174 04:03:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:52.174 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid68995", 00:05:52.174 "tpoint_group_mask": "0x8", 00:05:52.174 "iscsi_conn": { 00:05:52.174 "mask": "0x2", 00:05:52.174 "tpoint_mask": "0x0" 00:05:52.174 }, 00:05:52.174 "scsi": { 00:05:52.174 "mask": "0x4", 00:05:52.174 "tpoint_mask": "0x0" 00:05:52.174 }, 00:05:52.174 "bdev": { 00:05:52.174 "mask": "0x8", 00:05:52.174 "tpoint_mask": "0xffffffffffffffff" 00:05:52.174 }, 00:05:52.174 "nvmf_rdma": { 00:05:52.174 "mask": "0x10", 00:05:52.174 "tpoint_mask": "0x0" 00:05:52.174 }, 00:05:52.174 "nvmf_tcp": { 00:05:52.174 "mask": "0x20", 00:05:52.174 "tpoint_mask": "0x0" 00:05:52.174 }, 00:05:52.174 "ftl": { 00:05:52.174 "mask": "0x40", 00:05:52.174 "tpoint_mask": "0x0" 00:05:52.174 }, 00:05:52.174 "blobfs": { 00:05:52.174 "mask": "0x80", 00:05:52.174 "tpoint_mask": "0x0" 00:05:52.174 }, 00:05:52.174 "dsa": { 00:05:52.174 "mask": "0x200", 00:05:52.174 "tpoint_mask": "0x0" 00:05:52.174 }, 00:05:52.174 "thread": { 00:05:52.174 "mask": "0x400", 00:05:52.174 
"tpoint_mask": "0x0" 00:05:52.174 }, 00:05:52.174 "nvme_pcie": { 00:05:52.174 "mask": "0x800", 00:05:52.175 "tpoint_mask": "0x0" 00:05:52.175 }, 00:05:52.175 "iaa": { 00:05:52.175 "mask": "0x1000", 00:05:52.175 "tpoint_mask": "0x0" 00:05:52.175 }, 00:05:52.175 "nvme_tcp": { 00:05:52.175 "mask": "0x2000", 00:05:52.175 "tpoint_mask": "0x0" 00:05:52.175 }, 00:05:52.175 "bdev_nvme": { 00:05:52.175 "mask": "0x4000", 00:05:52.175 "tpoint_mask": "0x0" 00:05:52.175 }, 00:05:52.175 "sock": { 00:05:52.175 "mask": "0x8000", 00:05:52.175 "tpoint_mask": "0x0" 00:05:52.175 }, 00:05:52.175 "blob": { 00:05:52.175 "mask": "0x10000", 00:05:52.175 "tpoint_mask": "0x0" 00:05:52.175 }, 00:05:52.175 "bdev_raid": { 00:05:52.175 "mask": "0x20000", 00:05:52.175 "tpoint_mask": "0x0" 00:05:52.175 }, 00:05:52.175 "scheduler": { 00:05:52.175 "mask": "0x40000", 00:05:52.175 "tpoint_mask": "0x0" 00:05:52.175 } 00:05:52.175 }' 00:05:52.175 04:03:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:52.175 04:03:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:52.175 04:03:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:52.175 04:03:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:52.175 04:03:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:52.175 04:03:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:52.175 04:03:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:52.175 04:03:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:52.175 04:03:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:52.448 04:03:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:52.448 00:05:52.448 real 0m0.260s 00:05:52.448 user 0m0.200s 00:05:52.448 sys 0m0.050s 00:05:52.448 04:03:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:05:52.448 04:03:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:52.448 ************************************ 00:05:52.448 END TEST rpc_trace_cmd_test 00:05:52.448 ************************************ 00:05:52.448 04:03:52 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:52.448 04:03:52 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:52.448 04:03:52 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:52.448 04:03:52 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:52.448 04:03:52 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:52.448 04:03:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.448 ************************************ 00:05:52.448 START TEST rpc_daemon_integrity 00:05:52.448 ************************************ 00:05:52.448 04:03:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:52.448 04:03:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:52.448 04:03:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.448 04:03:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:52.448 04:03:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.448 04:03:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:52.448 04:03:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:52.448 04:03:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:52.448 04:03:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:52.448 04:03:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.448 04:03:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:52.448 04:03:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.448 04:03:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 
-- # malloc=Malloc2 00:05:52.448 04:03:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:52.448 04:03:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.448 04:03:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:52.448 04:03:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.448 04:03:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:52.448 { 00:05:52.448 "name": "Malloc2", 00:05:52.448 "aliases": [ 00:05:52.448 "56db3bd7-9796-4a97-8a1b-904fcfbc2bdd" 00:05:52.448 ], 00:05:52.448 "product_name": "Malloc disk", 00:05:52.448 "block_size": 512, 00:05:52.448 "num_blocks": 16384, 00:05:52.448 "uuid": "56db3bd7-9796-4a97-8a1b-904fcfbc2bdd", 00:05:52.448 "assigned_rate_limits": { 00:05:52.448 "rw_ios_per_sec": 0, 00:05:52.448 "rw_mbytes_per_sec": 0, 00:05:52.448 "r_mbytes_per_sec": 0, 00:05:52.448 "w_mbytes_per_sec": 0 00:05:52.448 }, 00:05:52.448 "claimed": false, 00:05:52.448 "zoned": false, 00:05:52.448 "supported_io_types": { 00:05:52.448 "read": true, 00:05:52.448 "write": true, 00:05:52.448 "unmap": true, 00:05:52.448 "flush": true, 00:05:52.448 "reset": true, 00:05:52.448 "nvme_admin": false, 00:05:52.448 "nvme_io": false, 00:05:52.448 "nvme_io_md": false, 00:05:52.448 "write_zeroes": true, 00:05:52.448 "zcopy": true, 00:05:52.448 "get_zone_info": false, 00:05:52.448 "zone_management": false, 00:05:52.448 "zone_append": false, 00:05:52.448 "compare": false, 00:05:52.448 "compare_and_write": false, 00:05:52.448 "abort": true, 00:05:52.448 "seek_hole": false, 00:05:52.448 "seek_data": false, 00:05:52.448 "copy": true, 00:05:52.448 "nvme_iov_md": false 00:05:52.448 }, 00:05:52.448 "memory_domains": [ 00:05:52.448 { 00:05:52.448 "dma_device_id": "system", 00:05:52.448 "dma_device_type": 1 00:05:52.448 }, 00:05:52.449 { 00:05:52.449 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:52.449 "dma_device_type": 2 00:05:52.449 } 
00:05:52.449 ], 00:05:52.449 "driver_specific": {} 00:05:52.449 } 00:05:52.449 ]' 00:05:52.449 04:03:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:52.449 04:03:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:52.449 04:03:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:52.449 04:03:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.449 04:03:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:52.449 [2024-11-21 04:03:52.369352] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:52.449 [2024-11-21 04:03:52.369418] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:52.449 [2024-11-21 04:03:52.369450] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:05:52.449 [2024-11-21 04:03:52.369461] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:52.449 [2024-11-21 04:03:52.372273] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:52.449 [2024-11-21 04:03:52.372309] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:52.449 Passthru0 00:05:52.449 04:03:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.449 04:03:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:52.449 04:03:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.449 04:03:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:52.449 04:03:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.449 04:03:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:52.449 { 00:05:52.449 "name": "Malloc2", 00:05:52.449 "aliases": [ 00:05:52.449 "56db3bd7-9796-4a97-8a1b-904fcfbc2bdd" 
00:05:52.449 ], 00:05:52.449 "product_name": "Malloc disk", 00:05:52.449 "block_size": 512, 00:05:52.449 "num_blocks": 16384, 00:05:52.449 "uuid": "56db3bd7-9796-4a97-8a1b-904fcfbc2bdd", 00:05:52.449 "assigned_rate_limits": { 00:05:52.449 "rw_ios_per_sec": 0, 00:05:52.449 "rw_mbytes_per_sec": 0, 00:05:52.449 "r_mbytes_per_sec": 0, 00:05:52.449 "w_mbytes_per_sec": 0 00:05:52.449 }, 00:05:52.449 "claimed": true, 00:05:52.449 "claim_type": "exclusive_write", 00:05:52.449 "zoned": false, 00:05:52.449 "supported_io_types": { 00:05:52.449 "read": true, 00:05:52.449 "write": true, 00:05:52.449 "unmap": true, 00:05:52.449 "flush": true, 00:05:52.449 "reset": true, 00:05:52.449 "nvme_admin": false, 00:05:52.449 "nvme_io": false, 00:05:52.449 "nvme_io_md": false, 00:05:52.449 "write_zeroes": true, 00:05:52.449 "zcopy": true, 00:05:52.449 "get_zone_info": false, 00:05:52.449 "zone_management": false, 00:05:52.449 "zone_append": false, 00:05:52.449 "compare": false, 00:05:52.449 "compare_and_write": false, 00:05:52.449 "abort": true, 00:05:52.449 "seek_hole": false, 00:05:52.449 "seek_data": false, 00:05:52.449 "copy": true, 00:05:52.449 "nvme_iov_md": false 00:05:52.449 }, 00:05:52.449 "memory_domains": [ 00:05:52.449 { 00:05:52.449 "dma_device_id": "system", 00:05:52.449 "dma_device_type": 1 00:05:52.449 }, 00:05:52.449 { 00:05:52.449 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:52.449 "dma_device_type": 2 00:05:52.449 } 00:05:52.449 ], 00:05:52.449 "driver_specific": {} 00:05:52.449 }, 00:05:52.449 { 00:05:52.449 "name": "Passthru0", 00:05:52.449 "aliases": [ 00:05:52.449 "c82785bb-d697-5acc-a999-8dc23b135231" 00:05:52.449 ], 00:05:52.449 "product_name": "passthru", 00:05:52.449 "block_size": 512, 00:05:52.449 "num_blocks": 16384, 00:05:52.449 "uuid": "c82785bb-d697-5acc-a999-8dc23b135231", 00:05:52.449 "assigned_rate_limits": { 00:05:52.449 "rw_ios_per_sec": 0, 00:05:52.449 "rw_mbytes_per_sec": 0, 00:05:52.449 "r_mbytes_per_sec": 0, 00:05:52.449 "w_mbytes_per_sec": 0 
00:05:52.449 }, 00:05:52.449 "claimed": false, 00:05:52.449 "zoned": false, 00:05:52.449 "supported_io_types": { 00:05:52.449 "read": true, 00:05:52.449 "write": true, 00:05:52.449 "unmap": true, 00:05:52.449 "flush": true, 00:05:52.449 "reset": true, 00:05:52.449 "nvme_admin": false, 00:05:52.449 "nvme_io": false, 00:05:52.449 "nvme_io_md": false, 00:05:52.449 "write_zeroes": true, 00:05:52.449 "zcopy": true, 00:05:52.449 "get_zone_info": false, 00:05:52.449 "zone_management": false, 00:05:52.449 "zone_append": false, 00:05:52.449 "compare": false, 00:05:52.449 "compare_and_write": false, 00:05:52.449 "abort": true, 00:05:52.449 "seek_hole": false, 00:05:52.449 "seek_data": false, 00:05:52.449 "copy": true, 00:05:52.449 "nvme_iov_md": false 00:05:52.449 }, 00:05:52.449 "memory_domains": [ 00:05:52.449 { 00:05:52.449 "dma_device_id": "system", 00:05:52.449 "dma_device_type": 1 00:05:52.449 }, 00:05:52.449 { 00:05:52.449 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:52.449 "dma_device_type": 2 00:05:52.449 } 00:05:52.449 ], 00:05:52.449 "driver_specific": { 00:05:52.449 "passthru": { 00:05:52.449 "name": "Passthru0", 00:05:52.449 "base_bdev_name": "Malloc2" 00:05:52.449 } 00:05:52.449 } 00:05:52.449 } 00:05:52.449 ]' 00:05:52.449 04:03:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:52.720 04:03:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:52.720 04:03:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:52.720 04:03:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.720 04:03:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:52.720 04:03:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.720 04:03:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:52.720 04:03:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:05:52.720 04:03:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:52.720 04:03:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.720 04:03:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:52.720 04:03:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.720 04:03:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:52.720 04:03:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.720 04:03:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:52.720 04:03:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:52.720 ************************************ 00:05:52.720 END TEST rpc_daemon_integrity 00:05:52.720 ************************************ 00:05:52.720 04:03:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:52.720 00:05:52.720 real 0m0.308s 00:05:52.720 user 0m0.182s 00:05:52.720 sys 0m0.054s 00:05:52.720 04:03:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:52.720 04:03:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:52.720 04:03:52 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:52.720 04:03:52 rpc -- rpc/rpc.sh@84 -- # killprocess 68995 00:05:52.720 04:03:52 rpc -- common/autotest_common.sh@954 -- # '[' -z 68995 ']' 00:05:52.720 04:03:52 rpc -- common/autotest_common.sh@958 -- # kill -0 68995 00:05:52.720 04:03:52 rpc -- common/autotest_common.sh@959 -- # uname 00:05:52.720 04:03:52 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:52.720 04:03:52 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68995 00:05:52.720 killing process with pid 68995 00:05:52.720 04:03:52 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:52.721 04:03:52 rpc -- common/autotest_common.sh@964 -- 
# '[' reactor_0 = sudo ']' 00:05:52.721 04:03:52 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68995' 00:05:52.721 04:03:52 rpc -- common/autotest_common.sh@973 -- # kill 68995 00:05:52.721 04:03:52 rpc -- common/autotest_common.sh@978 -- # wait 68995 00:05:53.289 00:05:53.289 real 0m3.090s 00:05:53.289 user 0m3.483s 00:05:53.289 sys 0m1.019s 00:05:53.289 ************************************ 00:05:53.289 END TEST rpc 00:05:53.289 ************************************ 00:05:53.289 04:03:53 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.289 04:03:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.548 04:03:53 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:53.548 04:03:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:53.549 04:03:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.549 04:03:53 -- common/autotest_common.sh@10 -- # set +x 00:05:53.549 ************************************ 00:05:53.549 START TEST skip_rpc 00:05:53.549 ************************************ 00:05:53.549 04:03:53 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:53.549 * Looking for test storage... 
00:05:53.549 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:53.549 04:03:53 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:53.549 04:03:53 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:53.549 04:03:53 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:53.549 04:03:53 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:53.549 04:03:53 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:53.549 04:03:53 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:53.549 04:03:53 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:53.549 04:03:53 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:53.549 04:03:53 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:53.549 04:03:53 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:53.549 04:03:53 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:53.549 04:03:53 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:53.549 04:03:53 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:53.549 04:03:53 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:53.549 04:03:53 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:53.549 04:03:53 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:53.549 04:03:53 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:53.549 04:03:53 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:53.549 04:03:53 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:53.549 04:03:53 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:53.549 04:03:53 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:53.549 04:03:53 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:53.549 04:03:53 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:53.549 04:03:53 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:53.549 04:03:53 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:53.549 04:03:53 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:53.549 04:03:53 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:53.549 04:03:53 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:53.549 04:03:53 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:53.549 04:03:53 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:53.549 04:03:53 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:53.549 04:03:53 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:53.549 04:03:53 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:53.549 04:03:53 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:53.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.549 --rc genhtml_branch_coverage=1 00:05:53.549 --rc genhtml_function_coverage=1 00:05:53.549 --rc genhtml_legend=1 00:05:53.549 --rc geninfo_all_blocks=1 00:05:53.549 --rc geninfo_unexecuted_blocks=1 00:05:53.549 00:05:53.549 ' 00:05:53.549 04:03:53 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:53.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.549 --rc genhtml_branch_coverage=1 00:05:53.549 --rc genhtml_function_coverage=1 00:05:53.549 --rc genhtml_legend=1 00:05:53.549 --rc geninfo_all_blocks=1 00:05:53.549 --rc geninfo_unexecuted_blocks=1 00:05:53.549 00:05:53.549 ' 00:05:53.549 04:03:53 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:05:53.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.549 --rc genhtml_branch_coverage=1 00:05:53.549 --rc genhtml_function_coverage=1 00:05:53.549 --rc genhtml_legend=1 00:05:53.549 --rc geninfo_all_blocks=1 00:05:53.549 --rc geninfo_unexecuted_blocks=1 00:05:53.549 00:05:53.549 ' 00:05:53.549 04:03:53 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:53.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.549 --rc genhtml_branch_coverage=1 00:05:53.549 --rc genhtml_function_coverage=1 00:05:53.549 --rc genhtml_legend=1 00:05:53.549 --rc geninfo_all_blocks=1 00:05:53.549 --rc geninfo_unexecuted_blocks=1 00:05:53.549 00:05:53.549 ' 00:05:53.808 04:03:53 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:53.808 04:03:53 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:53.809 04:03:53 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:53.809 04:03:53 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:53.809 04:03:53 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.809 04:03:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.809 ************************************ 00:05:53.809 START TEST skip_rpc 00:05:53.809 ************************************ 00:05:53.809 04:03:53 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:53.809 04:03:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=69202 00:05:53.809 04:03:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:53.809 04:03:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:53.809 04:03:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:53.809 [2024-11-21 04:03:53.639001] Starting SPDK v25.01-pre 
git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:05:53.809 [2024-11-21 04:03:53.639262] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69202 ] 00:05:54.068 [2024-11-21 04:03:53.795887] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.068 [2024-11-21 04:03:53.837356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.347 04:03:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:59.347 04:03:58 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:59.347 04:03:58 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:59.347 04:03:58 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:59.347 04:03:58 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:59.347 04:03:58 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:59.347 04:03:58 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:59.347 04:03:58 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:59.347 04:03:58 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:59.347 04:03:58 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.347 04:03:58 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:59.347 04:03:58 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:59.347 04:03:58 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:59.347 04:03:58 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:59.347 04:03:58 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:05:59.347 04:03:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:59.347 04:03:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 69202 00:05:59.347 04:03:58 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 69202 ']' 00:05:59.347 04:03:58 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 69202 00:05:59.347 04:03:58 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:59.347 04:03:58 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:59.347 04:03:58 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69202 00:05:59.347 killing process with pid 69202 00:05:59.347 04:03:58 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:59.347 04:03:58 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:59.347 04:03:58 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69202' 00:05:59.347 04:03:58 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 69202 00:05:59.347 04:03:58 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 69202 00:05:59.347 ************************************ 00:05:59.347 END TEST skip_rpc 00:05:59.347 ************************************ 00:05:59.347 00:05:59.347 real 0m5.668s 00:05:59.347 user 0m5.113s 00:05:59.347 sys 0m0.480s 00:05:59.347 04:03:59 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:59.347 04:03:59 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.347 04:03:59 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:59.347 04:03:59 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:59.347 04:03:59 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:59.347 04:03:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.347 
************************************ 00:05:59.347 START TEST skip_rpc_with_json 00:05:59.347 ************************************ 00:05:59.347 04:03:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:59.347 04:03:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:59.347 04:03:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=69290 00:05:59.347 04:03:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:59.347 04:03:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:59.347 04:03:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 69290 00:05:59.347 04:03:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 69290 ']' 00:05:59.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.347 04:03:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.347 04:03:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:59.347 04:03:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.347 04:03:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:59.347 04:03:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:59.606 [2024-11-21 04:03:59.376727] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:05:59.606 [2024-11-21 04:03:59.376894] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69290 ] 00:05:59.606 [2024-11-21 04:03:59.529183] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.606 [2024-11-21 04:03:59.569818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.547 04:04:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:00.547 04:04:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:06:00.547 04:04:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:00.547 04:04:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:00.547 04:04:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:00.547 [2024-11-21 04:04:00.182941] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:00.547 request: 00:06:00.547 { 00:06:00.547 "trtype": "tcp", 00:06:00.547 "method": "nvmf_get_transports", 00:06:00.547 "req_id": 1 00:06:00.547 } 00:06:00.547 Got JSON-RPC error response 00:06:00.547 response: 00:06:00.547 { 00:06:00.547 "code": -19, 00:06:00.547 "message": "No such device" 00:06:00.547 } 00:06:00.547 04:04:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:00.547 04:04:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:00.547 04:04:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:00.547 04:04:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:00.547 [2024-11-21 04:04:00.195094] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:06:00.547 04:04:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:00.547 04:04:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:00.547 04:04:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:00.547 04:04:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:00.547 04:04:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:00.547 04:04:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:00.547 { 00:06:00.547 "subsystems": [ 00:06:00.547 { 00:06:00.547 "subsystem": "fsdev", 00:06:00.547 "config": [ 00:06:00.547 { 00:06:00.547 "method": "fsdev_set_opts", 00:06:00.547 "params": { 00:06:00.547 "fsdev_io_pool_size": 65535, 00:06:00.547 "fsdev_io_cache_size": 256 00:06:00.547 } 00:06:00.547 } 00:06:00.547 ] 00:06:00.547 }, 00:06:00.547 { 00:06:00.547 "subsystem": "keyring", 00:06:00.547 "config": [] 00:06:00.547 }, 00:06:00.547 { 00:06:00.547 "subsystem": "iobuf", 00:06:00.547 "config": [ 00:06:00.547 { 00:06:00.547 "method": "iobuf_set_options", 00:06:00.547 "params": { 00:06:00.547 "small_pool_count": 8192, 00:06:00.547 "large_pool_count": 1024, 00:06:00.547 "small_bufsize": 8192, 00:06:00.547 "large_bufsize": 135168, 00:06:00.547 "enable_numa": false 00:06:00.547 } 00:06:00.547 } 00:06:00.547 ] 00:06:00.547 }, 00:06:00.547 { 00:06:00.547 "subsystem": "sock", 00:06:00.547 "config": [ 00:06:00.547 { 00:06:00.547 "method": "sock_set_default_impl", 00:06:00.547 "params": { 00:06:00.547 "impl_name": "posix" 00:06:00.547 } 00:06:00.547 }, 00:06:00.547 { 00:06:00.547 "method": "sock_impl_set_options", 00:06:00.547 "params": { 00:06:00.547 "impl_name": "ssl", 00:06:00.547 "recv_buf_size": 4096, 00:06:00.547 "send_buf_size": 4096, 00:06:00.547 "enable_recv_pipe": true, 00:06:00.547 "enable_quickack": false, 00:06:00.547 
"enable_placement_id": 0, 00:06:00.547 "enable_zerocopy_send_server": true, 00:06:00.547 "enable_zerocopy_send_client": false, 00:06:00.547 "zerocopy_threshold": 0, 00:06:00.547 "tls_version": 0, 00:06:00.547 "enable_ktls": false 00:06:00.547 } 00:06:00.547 }, 00:06:00.547 { 00:06:00.547 "method": "sock_impl_set_options", 00:06:00.547 "params": { 00:06:00.547 "impl_name": "posix", 00:06:00.547 "recv_buf_size": 2097152, 00:06:00.547 "send_buf_size": 2097152, 00:06:00.547 "enable_recv_pipe": true, 00:06:00.547 "enable_quickack": false, 00:06:00.547 "enable_placement_id": 0, 00:06:00.547 "enable_zerocopy_send_server": true, 00:06:00.547 "enable_zerocopy_send_client": false, 00:06:00.547 "zerocopy_threshold": 0, 00:06:00.547 "tls_version": 0, 00:06:00.547 "enable_ktls": false 00:06:00.547 } 00:06:00.547 } 00:06:00.547 ] 00:06:00.547 }, 00:06:00.547 { 00:06:00.547 "subsystem": "vmd", 00:06:00.547 "config": [] 00:06:00.547 }, 00:06:00.547 { 00:06:00.547 "subsystem": "accel", 00:06:00.547 "config": [ 00:06:00.547 { 00:06:00.547 "method": "accel_set_options", 00:06:00.547 "params": { 00:06:00.547 "small_cache_size": 128, 00:06:00.547 "large_cache_size": 16, 00:06:00.547 "task_count": 2048, 00:06:00.547 "sequence_count": 2048, 00:06:00.547 "buf_count": 2048 00:06:00.547 } 00:06:00.547 } 00:06:00.547 ] 00:06:00.547 }, 00:06:00.547 { 00:06:00.547 "subsystem": "bdev", 00:06:00.547 "config": [ 00:06:00.547 { 00:06:00.547 "method": "bdev_set_options", 00:06:00.547 "params": { 00:06:00.547 "bdev_io_pool_size": 65535, 00:06:00.547 "bdev_io_cache_size": 256, 00:06:00.547 "bdev_auto_examine": true, 00:06:00.547 "iobuf_small_cache_size": 128, 00:06:00.547 "iobuf_large_cache_size": 16 00:06:00.547 } 00:06:00.547 }, 00:06:00.547 { 00:06:00.547 "method": "bdev_raid_set_options", 00:06:00.547 "params": { 00:06:00.547 "process_window_size_kb": 1024, 00:06:00.547 "process_max_bandwidth_mb_sec": 0 00:06:00.547 } 00:06:00.547 }, 00:06:00.547 { 00:06:00.547 "method": "bdev_iscsi_set_options", 
00:06:00.547 "params": { 00:06:00.547 "timeout_sec": 30 00:06:00.547 } 00:06:00.547 }, 00:06:00.547 { 00:06:00.547 "method": "bdev_nvme_set_options", 00:06:00.547 "params": { 00:06:00.547 "action_on_timeout": "none", 00:06:00.547 "timeout_us": 0, 00:06:00.547 "timeout_admin_us": 0, 00:06:00.547 "keep_alive_timeout_ms": 10000, 00:06:00.547 "arbitration_burst": 0, 00:06:00.547 "low_priority_weight": 0, 00:06:00.547 "medium_priority_weight": 0, 00:06:00.547 "high_priority_weight": 0, 00:06:00.547 "nvme_adminq_poll_period_us": 10000, 00:06:00.547 "nvme_ioq_poll_period_us": 0, 00:06:00.547 "io_queue_requests": 0, 00:06:00.547 "delay_cmd_submit": true, 00:06:00.547 "transport_retry_count": 4, 00:06:00.547 "bdev_retry_count": 3, 00:06:00.547 "transport_ack_timeout": 0, 00:06:00.547 "ctrlr_loss_timeout_sec": 0, 00:06:00.547 "reconnect_delay_sec": 0, 00:06:00.547 "fast_io_fail_timeout_sec": 0, 00:06:00.547 "disable_auto_failback": false, 00:06:00.547 "generate_uuids": false, 00:06:00.547 "transport_tos": 0, 00:06:00.547 "nvme_error_stat": false, 00:06:00.547 "rdma_srq_size": 0, 00:06:00.547 "io_path_stat": false, 00:06:00.547 "allow_accel_sequence": false, 00:06:00.547 "rdma_max_cq_size": 0, 00:06:00.547 "rdma_cm_event_timeout_ms": 0, 00:06:00.547 "dhchap_digests": [ 00:06:00.547 "sha256", 00:06:00.547 "sha384", 00:06:00.547 "sha512" 00:06:00.547 ], 00:06:00.547 "dhchap_dhgroups": [ 00:06:00.547 "null", 00:06:00.547 "ffdhe2048", 00:06:00.547 "ffdhe3072", 00:06:00.547 "ffdhe4096", 00:06:00.547 "ffdhe6144", 00:06:00.547 "ffdhe8192" 00:06:00.547 ] 00:06:00.547 } 00:06:00.547 }, 00:06:00.547 { 00:06:00.547 "method": "bdev_nvme_set_hotplug", 00:06:00.547 "params": { 00:06:00.547 "period_us": 100000, 00:06:00.547 "enable": false 00:06:00.547 } 00:06:00.547 }, 00:06:00.547 { 00:06:00.547 "method": "bdev_wait_for_examine" 00:06:00.547 } 00:06:00.547 ] 00:06:00.547 }, 00:06:00.547 { 00:06:00.547 "subsystem": "scsi", 00:06:00.547 "config": null 00:06:00.547 }, 00:06:00.547 { 
00:06:00.547 "subsystem": "scheduler", 00:06:00.547 "config": [ 00:06:00.547 { 00:06:00.547 "method": "framework_set_scheduler", 00:06:00.547 "params": { 00:06:00.547 "name": "static" 00:06:00.547 } 00:06:00.547 } 00:06:00.547 ] 00:06:00.547 }, 00:06:00.547 { 00:06:00.547 "subsystem": "vhost_scsi", 00:06:00.547 "config": [] 00:06:00.547 }, 00:06:00.547 { 00:06:00.547 "subsystem": "vhost_blk", 00:06:00.547 "config": [] 00:06:00.547 }, 00:06:00.547 { 00:06:00.547 "subsystem": "ublk", 00:06:00.547 "config": [] 00:06:00.547 }, 00:06:00.547 { 00:06:00.547 "subsystem": "nbd", 00:06:00.547 "config": [] 00:06:00.547 }, 00:06:00.547 { 00:06:00.547 "subsystem": "nvmf", 00:06:00.547 "config": [ 00:06:00.547 { 00:06:00.547 "method": "nvmf_set_config", 00:06:00.547 "params": { 00:06:00.547 "discovery_filter": "match_any", 00:06:00.547 "admin_cmd_passthru": { 00:06:00.547 "identify_ctrlr": false 00:06:00.547 }, 00:06:00.547 "dhchap_digests": [ 00:06:00.547 "sha256", 00:06:00.547 "sha384", 00:06:00.547 "sha512" 00:06:00.547 ], 00:06:00.547 "dhchap_dhgroups": [ 00:06:00.547 "null", 00:06:00.547 "ffdhe2048", 00:06:00.547 "ffdhe3072", 00:06:00.547 "ffdhe4096", 00:06:00.547 "ffdhe6144", 00:06:00.547 "ffdhe8192" 00:06:00.547 ] 00:06:00.547 } 00:06:00.547 }, 00:06:00.547 { 00:06:00.547 "method": "nvmf_set_max_subsystems", 00:06:00.548 "params": { 00:06:00.548 "max_subsystems": 1024 00:06:00.548 } 00:06:00.548 }, 00:06:00.548 { 00:06:00.548 "method": "nvmf_set_crdt", 00:06:00.548 "params": { 00:06:00.548 "crdt1": 0, 00:06:00.548 "crdt2": 0, 00:06:00.548 "crdt3": 0 00:06:00.548 } 00:06:00.548 }, 00:06:00.548 { 00:06:00.548 "method": "nvmf_create_transport", 00:06:00.548 "params": { 00:06:00.548 "trtype": "TCP", 00:06:00.548 "max_queue_depth": 128, 00:06:00.548 "max_io_qpairs_per_ctrlr": 127, 00:06:00.548 "in_capsule_data_size": 4096, 00:06:00.548 "max_io_size": 131072, 00:06:00.548 "io_unit_size": 131072, 00:06:00.548 "max_aq_depth": 128, 00:06:00.548 "num_shared_buffers": 511, 
00:06:00.548 "buf_cache_size": 4294967295, 00:06:00.548 "dif_insert_or_strip": false, 00:06:00.548 "zcopy": false, 00:06:00.548 "c2h_success": true, 00:06:00.548 "sock_priority": 0, 00:06:00.548 "abort_timeout_sec": 1, 00:06:00.548 "ack_timeout": 0, 00:06:00.548 "data_wr_pool_size": 0 00:06:00.548 } 00:06:00.548 } 00:06:00.548 ] 00:06:00.548 }, 00:06:00.548 { 00:06:00.548 "subsystem": "iscsi", 00:06:00.548 "config": [ 00:06:00.548 { 00:06:00.548 "method": "iscsi_set_options", 00:06:00.548 "params": { 00:06:00.548 "node_base": "iqn.2016-06.io.spdk", 00:06:00.548 "max_sessions": 128, 00:06:00.548 "max_connections_per_session": 2, 00:06:00.548 "max_queue_depth": 64, 00:06:00.548 "default_time2wait": 2, 00:06:00.548 "default_time2retain": 20, 00:06:00.548 "first_burst_length": 8192, 00:06:00.548 "immediate_data": true, 00:06:00.548 "allow_duplicated_isid": false, 00:06:00.548 "error_recovery_level": 0, 00:06:00.548 "nop_timeout": 60, 00:06:00.548 "nop_in_interval": 30, 00:06:00.548 "disable_chap": false, 00:06:00.548 "require_chap": false, 00:06:00.548 "mutual_chap": false, 00:06:00.548 "chap_group": 0, 00:06:00.548 "max_large_datain_per_connection": 64, 00:06:00.548 "max_r2t_per_connection": 4, 00:06:00.548 "pdu_pool_size": 36864, 00:06:00.548 "immediate_data_pool_size": 16384, 00:06:00.548 "data_out_pool_size": 2048 00:06:00.548 } 00:06:00.548 } 00:06:00.548 ] 00:06:00.548 } 00:06:00.548 ] 00:06:00.548 } 00:06:00.548 04:04:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:00.548 04:04:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 69290 00:06:00.548 04:04:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 69290 ']' 00:06:00.548 04:04:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 69290 00:06:00.548 04:04:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:00.548 04:04:00 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:00.548 04:04:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69290 00:06:00.548 killing process with pid 69290 00:06:00.548 04:04:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:00.548 04:04:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:00.548 04:04:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69290' 00:06:00.548 04:04:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 69290 00:06:00.548 04:04:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 69290 00:06:01.119 04:04:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=69324 00:06:01.119 04:04:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:01.119 04:04:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:06.401 04:04:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 69324 00:06:06.401 04:04:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 69324 ']' 00:06:06.401 04:04:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 69324 00:06:06.401 04:04:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:06.401 04:04:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:06.401 04:04:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69324 00:06:06.401 04:04:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:06.401 04:04:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = 
sudo ']' 00:06:06.401 killing process with pid 69324 00:06:06.401 04:04:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69324' 00:06:06.401 04:04:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 69324 00:06:06.401 04:04:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 69324 00:06:06.972 04:04:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:06.972 04:04:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:06.972 00:06:06.972 real 0m7.413s 00:06:06.972 user 0m6.694s 00:06:06.972 sys 0m0.991s 00:06:06.972 04:04:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.972 04:04:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:06.972 ************************************ 00:06:06.972 END TEST skip_rpc_with_json 00:06:06.972 ************************************ 00:06:06.972 04:04:06 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:06.972 04:04:06 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:06.972 04:04:06 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.972 04:04:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.972 ************************************ 00:06:06.972 START TEST skip_rpc_with_delay 00:06:06.972 ************************************ 00:06:06.972 04:04:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:06:06.972 04:04:06 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:06.972 04:04:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:06:06.972 04:04:06 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:06.972 04:04:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:06.972 04:04:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:06.972 04:04:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:06.972 04:04:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:06.972 04:04:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:06.972 04:04:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:06.972 04:04:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:06.972 04:04:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:06.972 04:04:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:06.972 [2024-11-21 04:04:06.865388] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:06:06.972 04:04:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:06:06.972 04:04:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:06.972 04:04:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:06.972 04:04:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:06.972 00:06:06.972 real 0m0.171s 00:06:06.972 user 0m0.089s 00:06:06.972 sys 0m0.081s 00:06:06.972 ************************************ 00:06:06.972 END TEST skip_rpc_with_delay 00:06:06.972 ************************************ 00:06:06.972 04:04:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.972 04:04:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:07.232 04:04:06 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:07.232 04:04:06 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:07.232 04:04:06 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:07.232 04:04:06 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:07.232 04:04:06 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.232 04:04:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.232 ************************************ 00:06:07.232 START TEST exit_on_failed_rpc_init 00:06:07.232 ************************************ 00:06:07.232 04:04:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:06:07.232 04:04:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=69430 00:06:07.232 04:04:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:07.232 04:04:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 69430 00:06:07.232 04:04:07 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 69430 ']' 00:06:07.232 04:04:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.232 04:04:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:07.232 04:04:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.232 04:04:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:07.232 04:04:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:07.232 [2024-11-21 04:04:07.117921] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:06:07.232 [2024-11-21 04:04:07.118086] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69430 ] 00:06:07.492 [2024-11-21 04:04:07.259133] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.492 [2024-11-21 04:04:07.298207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.061 04:04:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:08.061 04:04:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:06:08.061 04:04:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:08.061 04:04:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:08.062 04:04:07 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:06:08.062 04:04:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:08.062 04:04:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:08.062 04:04:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:08.062 04:04:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:08.062 04:04:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:08.062 04:04:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:08.062 04:04:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:08.062 04:04:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:08.062 04:04:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:08.062 04:04:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:08.322 [2024-11-21 04:04:08.036720] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
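The `es=234` / `(( es > 128 ))` / `es=106` / `es=1` sequence traced around here is the harness's negative-test wrapper at work: run a command that is expected to fail, normalize its exit status, and succeed only if it did fail. A hedged sketch of that idea (in the spirit of the trace, not the SPDK source):

```shell
#!/usr/bin/env bash
# Hedged sketch of a NOT()-style helper: succeeds exactly when the
# wrapped command fails.
NOT() {
    local es=0
    "$@" || es=$?
    # Statuses above 128 usually mean "terminated by a signal"; fold them
    # down to the raw signal number (234 -> 106 in the trace above).
    (( es > 128 )) && es=$(( es - 128 ))
    # Collapse any surviving failure status to 1 for uniform reporting.
    (( es != 0 )) && es=1
    # NOT succeeds exactly when the command did not.
    (( es == 1 ))
}

NOT false && echo "expected failure detected"
NOT true || echo "unexpected success detected"
```

This is why the test passes even though `spdk_tgt -m 0x2` aborts with an RPC socket error: that abort is the behavior under test.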
00:06:08.322 [2024-11-21 04:04:08.037308] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69448 ] 00:06:08.322 [2024-11-21 04:04:08.192674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.322 [2024-11-21 04:04:08.218496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:08.322 [2024-11-21 04:04:08.218594] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:06:08.322 [2024-11-21 04:04:08.218616] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:08.322 [2024-11-21 04:04:08.218626] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:08.592 04:04:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:06:08.592 04:04:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:08.592 04:04:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:06:08.592 04:04:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:06:08.592 04:04:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:06:08.592 04:04:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:08.592 04:04:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:08.592 04:04:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 69430 00:06:08.592 04:04:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 69430 ']' 00:06:08.592 04:04:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 69430 00:06:08.592 04:04:08 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:06:08.592 04:04:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:08.592 04:04:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69430 00:06:08.592 04:04:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:08.592 04:04:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:08.592 killing process with pid 69430 00:06:08.592 04:04:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69430' 00:06:08.592 04:04:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 69430 00:06:08.592 04:04:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 69430 00:06:09.203 00:06:09.203 real 0m1.942s 00:06:09.203 user 0m1.907s 00:06:09.203 sys 0m0.636s 00:06:09.203 04:04:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:09.203 04:04:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:09.203 ************************************ 00:06:09.203 END TEST exit_on_failed_rpc_init 00:06:09.203 ************************************ 00:06:09.203 04:04:09 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:09.203 00:06:09.203 real 0m15.720s 00:06:09.203 user 0m14.009s 00:06:09.203 sys 0m2.512s 00:06:09.203 04:04:09 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:09.203 04:04:09 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.203 ************************************ 00:06:09.203 END TEST skip_rpc 00:06:09.203 ************************************ 00:06:09.203 04:04:09 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:09.203 04:04:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:09.203 04:04:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:09.203 04:04:09 -- common/autotest_common.sh@10 -- # set +x 00:06:09.203 ************************************ 00:06:09.203 START TEST rpc_client 00:06:09.203 ************************************ 00:06:09.203 04:04:09 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:09.462 * Looking for test storage... 00:06:09.462 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:09.462 04:04:09 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:09.462 04:04:09 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:06:09.462 04:04:09 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:09.462 04:04:09 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:09.462 04:04:09 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:09.462 04:04:09 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:09.462 04:04:09 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:09.462 04:04:09 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:09.462 04:04:09 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:09.462 04:04:09 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:09.462 04:04:09 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:09.462 04:04:09 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:09.462 04:04:09 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:09.462 04:04:09 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:09.462 04:04:09 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:09.462 04:04:09 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:09.462 04:04:09 rpc_client -- scripts/common.sh@345 
-- # : 1 00:06:09.462 04:04:09 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:09.462 04:04:09 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:09.462 04:04:09 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:09.462 04:04:09 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:09.462 04:04:09 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:09.462 04:04:09 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:09.462 04:04:09 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:09.462 04:04:09 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:09.462 04:04:09 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:09.462 04:04:09 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:09.462 04:04:09 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:09.462 04:04:09 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:09.462 04:04:09 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:09.462 04:04:09 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:09.462 04:04:09 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:09.462 04:04:09 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:09.462 04:04:09 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:09.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.462 --rc genhtml_branch_coverage=1 00:06:09.462 --rc genhtml_function_coverage=1 00:06:09.462 --rc genhtml_legend=1 00:06:09.462 --rc geninfo_all_blocks=1 00:06:09.462 --rc geninfo_unexecuted_blocks=1 00:06:09.462 00:06:09.462 ' 00:06:09.462 04:04:09 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:09.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.462 --rc genhtml_branch_coverage=1 00:06:09.462 --rc genhtml_function_coverage=1 00:06:09.462 --rc 
genhtml_legend=1 00:06:09.462 --rc geninfo_all_blocks=1 00:06:09.462 --rc geninfo_unexecuted_blocks=1 00:06:09.462 00:06:09.462 ' 00:06:09.462 04:04:09 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:09.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.462 --rc genhtml_branch_coverage=1 00:06:09.462 --rc genhtml_function_coverage=1 00:06:09.462 --rc genhtml_legend=1 00:06:09.462 --rc geninfo_all_blocks=1 00:06:09.462 --rc geninfo_unexecuted_blocks=1 00:06:09.462 00:06:09.462 ' 00:06:09.462 04:04:09 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:09.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.462 --rc genhtml_branch_coverage=1 00:06:09.462 --rc genhtml_function_coverage=1 00:06:09.462 --rc genhtml_legend=1 00:06:09.462 --rc geninfo_all_blocks=1 00:06:09.462 --rc geninfo_unexecuted_blocks=1 00:06:09.462 00:06:09.462 ' 00:06:09.462 04:04:09 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:09.462 OK 00:06:09.462 04:04:09 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:09.462 00:06:09.462 real 0m0.300s 00:06:09.462 user 0m0.150s 00:06:09.462 sys 0m0.167s 00:06:09.462 04:04:09 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:09.462 04:04:09 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:09.462 ************************************ 00:06:09.462 END TEST rpc_client 00:06:09.462 ************************************ 00:06:09.722 04:04:09 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:09.722 04:04:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:09.722 04:04:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:09.722 04:04:09 -- common/autotest_common.sh@10 -- # set +x 00:06:09.722 ************************************ 00:06:09.722 START TEST json_config 
00:06:09.722 ************************************ 00:06:09.722 04:04:09 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:09.722 04:04:09 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:09.722 04:04:09 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:06:09.722 04:04:09 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:09.722 04:04:09 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:09.722 04:04:09 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:09.722 04:04:09 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:09.722 04:04:09 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:09.722 04:04:09 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:09.722 04:04:09 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:09.722 04:04:09 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:09.722 04:04:09 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:09.722 04:04:09 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:09.722 04:04:09 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:09.722 04:04:09 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:09.722 04:04:09 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:09.722 04:04:09 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:09.722 04:04:09 json_config -- scripts/common.sh@345 -- # : 1 00:06:09.722 04:04:09 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:09.722 04:04:09 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:09.722 04:04:09 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:09.722 04:04:09 json_config -- scripts/common.sh@353 -- # local d=1 00:06:09.722 04:04:09 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:09.722 04:04:09 json_config -- scripts/common.sh@355 -- # echo 1 00:06:09.722 04:04:09 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:09.722 04:04:09 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:09.722 04:04:09 json_config -- scripts/common.sh@353 -- # local d=2 00:06:09.722 04:04:09 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:09.722 04:04:09 json_config -- scripts/common.sh@355 -- # echo 2 00:06:09.722 04:04:09 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:09.722 04:04:09 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:09.722 04:04:09 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:09.722 04:04:09 json_config -- scripts/common.sh@368 -- # return 0 00:06:09.722 04:04:09 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:09.722 04:04:09 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:09.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.722 --rc genhtml_branch_coverage=1 00:06:09.722 --rc genhtml_function_coverage=1 00:06:09.722 --rc genhtml_legend=1 00:06:09.722 --rc geninfo_all_blocks=1 00:06:09.722 --rc geninfo_unexecuted_blocks=1 00:06:09.722 00:06:09.722 ' 00:06:09.722 04:04:09 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:09.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.722 --rc genhtml_branch_coverage=1 00:06:09.722 --rc genhtml_function_coverage=1 00:06:09.722 --rc genhtml_legend=1 00:06:09.722 --rc geninfo_all_blocks=1 00:06:09.722 --rc geninfo_unexecuted_blocks=1 00:06:09.722 00:06:09.722 ' 00:06:09.722 04:04:09 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:09.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.722 --rc genhtml_branch_coverage=1 00:06:09.722 --rc genhtml_function_coverage=1 00:06:09.722 --rc genhtml_legend=1 00:06:09.722 --rc geninfo_all_blocks=1 00:06:09.722 --rc geninfo_unexecuted_blocks=1 00:06:09.722 00:06:09.722 ' 00:06:09.722 04:04:09 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:09.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.722 --rc genhtml_branch_coverage=1 00:06:09.722 --rc genhtml_function_coverage=1 00:06:09.722 --rc genhtml_legend=1 00:06:09.722 --rc geninfo_all_blocks=1 00:06:09.722 --rc geninfo_unexecuted_blocks=1 00:06:09.722 00:06:09.722 ' 00:06:09.722 04:04:09 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:09.722 04:04:09 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:09.722 04:04:09 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:09.722 04:04:09 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:09.722 04:04:09 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:09.722 04:04:09 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:09.722 04:04:09 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:09.722 04:04:09 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:09.722 04:04:09 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:09.722 04:04:09 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:09.722 04:04:09 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:09.722 04:04:09 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:09.722 04:04:09 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:155a028a-f143-454f-b8f9-8f0e571b807d 00:06:09.722 04:04:09 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=155a028a-f143-454f-b8f9-8f0e571b807d 00:06:09.722 04:04:09 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:09.722 04:04:09 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:09.722 04:04:09 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:09.722 04:04:09 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:09.722 04:04:09 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:09.722 04:04:09 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:09.722 04:04:09 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:09.722 04:04:09 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:09.722 04:04:09 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:09.722 04:04:09 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.722 04:04:09 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.722 04:04:09 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.722 04:04:09 json_config -- paths/export.sh@5 -- # export PATH 00:06:09.722 04:04:09 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.722 04:04:09 json_config -- nvmf/common.sh@51 -- # : 0 00:06:09.722 04:04:09 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:09.722 04:04:09 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:09.722 04:04:09 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:09.722 04:04:09 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:09.722 04:04:09 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:09.722 04:04:09 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:09.722 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:09.722 04:04:09 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:09.722 04:04:09 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:09.722 04:04:09 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:09.722 04:04:09 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
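The repeated `cmp_versions 1.15 '<' 2` traces above (used to pick lcov options) split each dotted version on `.-:` into an array and compare the numeric fields left to right. A hedged, simplified reimplementation of that logic (not the `scripts/common.sh` source):

```shell
#!/usr/bin/env bash
# Hedged sketch: compare two dotted version strings field by field.
cmp_versions() {
    local -a ver1 ver2
    local op=$2 v
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        # Missing fields compare as 0, so "2" behaves like "2.0".
        local f1=${ver1[v]:-0} f2=${ver2[v]:-0}
        if (( f1 > f2 )); then
            [[ $op == '>' ]]; return
        elif (( f1 < f2 )); then
            [[ $op == '<' ]]; return
        fi
    done
    # All fields equal: neither strictly '<' nor strictly '>'.
    return 1
}

cmp_versions 1.15 '<' 2 && echo "1.15 < 2"
```

In the log this comparison returns 0 (lcov 1.15 is older than 2), so the branch/function coverage `--rc` flags for the older lcov syntax are exported.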
00:06:09.722 04:04:09 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:09.722 04:04:09 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:09.722 04:04:09 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:09.723 04:04:09 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:09.723 WARNING: No tests are enabled so not running JSON configuration tests 00:06:09.723 04:04:09 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:06:09.723 04:04:09 json_config -- json_config/json_config.sh@28 -- # exit 0 00:06:09.723 00:06:09.723 real 0m0.231s 00:06:09.723 user 0m0.138s 00:06:09.723 sys 0m0.103s 00:06:09.723 04:04:09 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:09.723 04:04:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:09.723 ************************************ 00:06:09.723 END TEST json_config 00:06:09.723 ************************************ 00:06:09.983 04:04:09 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:09.983 04:04:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:09.983 04:04:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:09.983 04:04:09 -- common/autotest_common.sh@10 -- # set +x 00:06:09.983 ************************************ 00:06:09.983 START TEST json_config_extra_key 00:06:09.983 ************************************ 00:06:09.983 04:04:09 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:09.983 04:04:09 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:09.983 04:04:09 json_config_extra_key -- 
common/autotest_common.sh@1693 -- # lcov --version 00:06:09.983 04:04:09 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:09.983 04:04:09 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:09.983 04:04:09 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:09.983 04:04:09 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:09.983 04:04:09 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:09.983 04:04:09 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:09.983 04:04:09 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:09.983 04:04:09 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:09.983 04:04:09 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:09.983 04:04:09 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:09.983 04:04:09 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:09.983 04:04:09 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:09.983 04:04:09 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:09.983 04:04:09 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:09.983 04:04:09 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:09.983 04:04:09 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:09.983 04:04:09 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:09.983 04:04:09 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:09.983 04:04:09 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:09.983 04:04:09 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:09.983 04:04:09 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:09.983 04:04:09 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:09.983 04:04:09 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:09.983 04:04:09 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:09.983 04:04:09 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:09.983 04:04:09 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:09.983 04:04:09 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:09.984 04:04:09 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:09.984 04:04:09 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:09.984 04:04:09 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:09.984 04:04:09 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:09.984 04:04:09 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:09.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.984 --rc genhtml_branch_coverage=1 00:06:09.984 --rc genhtml_function_coverage=1 00:06:09.984 --rc genhtml_legend=1 00:06:09.984 --rc geninfo_all_blocks=1 00:06:09.984 --rc geninfo_unexecuted_blocks=1 00:06:09.984 00:06:09.984 ' 00:06:09.984 04:04:09 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:09.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.984 --rc genhtml_branch_coverage=1 00:06:09.984 --rc genhtml_function_coverage=1 00:06:09.984 --rc 
genhtml_legend=1 00:06:09.984 --rc geninfo_all_blocks=1 00:06:09.984 --rc geninfo_unexecuted_blocks=1 00:06:09.984 00:06:09.984 ' 00:06:09.984 04:04:09 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:09.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.984 --rc genhtml_branch_coverage=1 00:06:09.984 --rc genhtml_function_coverage=1 00:06:09.984 --rc genhtml_legend=1 00:06:09.984 --rc geninfo_all_blocks=1 00:06:09.984 --rc geninfo_unexecuted_blocks=1 00:06:09.984 00:06:09.984 ' 00:06:09.984 04:04:09 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:09.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.984 --rc genhtml_branch_coverage=1 00:06:09.984 --rc genhtml_function_coverage=1 00:06:09.984 --rc genhtml_legend=1 00:06:09.984 --rc geninfo_all_blocks=1 00:06:09.984 --rc geninfo_unexecuted_blocks=1 00:06:09.984 00:06:09.984 ' 00:06:09.984 04:04:09 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:09.984 04:04:09 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:09.984 04:04:09 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:09.984 04:04:09 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:09.984 04:04:09 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:09.984 04:04:09 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:09.984 04:04:09 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:09.984 04:04:09 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:09.984 04:04:09 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:09.984 04:04:09 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:09.984 04:04:09 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:09.984 04:04:09 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:10.247 04:04:09 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:155a028a-f143-454f-b8f9-8f0e571b807d 00:06:10.247 04:04:09 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=155a028a-f143-454f-b8f9-8f0e571b807d 00:06:10.247 04:04:09 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:10.247 04:04:09 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:10.247 04:04:09 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:10.247 04:04:09 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:10.247 04:04:09 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:10.247 04:04:09 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:10.247 04:04:09 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:10.247 04:04:09 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:10.247 04:04:09 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:10.247 04:04:09 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.247 04:04:09 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.247 04:04:09 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.247 04:04:09 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:10.247 04:04:09 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.247 04:04:09 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:10.247 04:04:09 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:10.247 04:04:09 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:10.247 04:04:09 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:10.247 04:04:09 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:10.247 04:04:09 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:06:10.247 04:04:09 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:10.247 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:10.247 04:04:09 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:10.247 04:04:09 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:10.247 04:04:09 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:10.247 04:04:09 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:10.247 04:04:09 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:10.247 04:04:09 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:10.247 04:04:09 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:10.247 04:04:09 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:10.247 04:04:09 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:10.247 04:04:09 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:10.247 04:04:09 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:10.247 04:04:09 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:10.247 04:04:09 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:10.247 INFO: launching applications... 00:06:10.247 04:04:09 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
00:06:10.247 04:04:09 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:10.247 04:04:09 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:10.247 04:04:09 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:10.247 04:04:09 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:10.247 04:04:09 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:10.247 04:04:09 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:10.247 04:04:09 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:10.247 04:04:09 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:10.247 04:04:09 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=69636 00:06:10.247 Waiting for target to run... 00:06:10.247 04:04:09 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:10.247 04:04:09 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 69636 /var/tmp/spdk_tgt.sock 00:06:10.247 04:04:09 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 69636 ']' 00:06:10.247 04:04:09 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:10.247 04:04:09 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:10.247 04:04:09 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:10.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:06:10.247 04:04:09 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:10.247 04:04:09 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:10.247 04:04:09 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:10.247 [2024-11-21 04:04:10.092295] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:06:10.247 [2024-11-21 04:04:10.092479] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69636 ] 00:06:10.814 [2024-11-21 04:04:10.635842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.814 [2024-11-21 04:04:10.657690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.073 04:04:10 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:11.073 04:04:10 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:06:11.073 00:06:11.073 04:04:10 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:11.073 INFO: shutting down applications... 00:06:11.073 04:04:10 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:06:11.073 04:04:10 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:11.073 04:04:10 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:11.073 04:04:10 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:11.073 04:04:10 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 69636 ]] 00:06:11.073 04:04:10 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 69636 00:06:11.073 04:04:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:11.073 04:04:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:11.073 04:04:10 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69636 00:06:11.073 04:04:10 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:11.643 04:04:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:11.643 04:04:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:11.643 04:04:11 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69636 00:06:11.643 04:04:11 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:12.213 04:04:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:12.213 04:04:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:12.213 04:04:11 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69636 00:06:12.213 04:04:11 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:12.213 04:04:11 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:12.213 04:04:11 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:12.213 SPDK target shutdown done 00:06:12.213 04:04:11 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:12.213 Success 00:06:12.213 04:04:11 json_config_extra_key -- 
json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:12.213 00:06:12.213 real 0m2.187s 00:06:12.213 user 0m1.470s 00:06:12.213 sys 0m0.708s 00:06:12.213 04:04:11 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.213 04:04:11 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:12.213 ************************************ 00:06:12.213 END TEST json_config_extra_key 00:06:12.213 ************************************ 00:06:12.213 04:04:12 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:12.213 04:04:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:12.213 04:04:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.213 04:04:12 -- common/autotest_common.sh@10 -- # set +x 00:06:12.213 ************************************ 00:06:12.213 START TEST alias_rpc 00:06:12.213 ************************************ 00:06:12.213 04:04:12 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:12.213 * Looking for test storage... 
00:06:12.213 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:12.213 04:04:12 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:12.213 04:04:12 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:12.213 04:04:12 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:12.473 04:04:12 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:12.473 04:04:12 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:12.473 04:04:12 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:12.473 04:04:12 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:12.473 04:04:12 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:12.473 04:04:12 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:12.473 04:04:12 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:12.473 04:04:12 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:12.473 04:04:12 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:12.473 04:04:12 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:12.473 04:04:12 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:12.473 04:04:12 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:12.473 04:04:12 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:12.473 04:04:12 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:12.473 04:04:12 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:12.473 04:04:12 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:12.473 04:04:12 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:12.473 04:04:12 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:12.473 04:04:12 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:12.473 04:04:12 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:12.473 04:04:12 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:12.473 04:04:12 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:12.473 04:04:12 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:12.473 04:04:12 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:12.473 04:04:12 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:12.473 04:04:12 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:12.473 04:04:12 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:12.473 04:04:12 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:12.473 04:04:12 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:12.473 04:04:12 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:12.474 04:04:12 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:12.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.474 --rc genhtml_branch_coverage=1 00:06:12.474 --rc genhtml_function_coverage=1 00:06:12.474 --rc genhtml_legend=1 00:06:12.474 --rc geninfo_all_blocks=1 00:06:12.474 --rc geninfo_unexecuted_blocks=1 00:06:12.474 00:06:12.474 ' 00:06:12.474 04:04:12 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:12.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.474 --rc genhtml_branch_coverage=1 00:06:12.474 --rc genhtml_function_coverage=1 00:06:12.474 --rc genhtml_legend=1 00:06:12.474 --rc geninfo_all_blocks=1 00:06:12.474 --rc geninfo_unexecuted_blocks=1 00:06:12.474 00:06:12.474 ' 00:06:12.474 04:04:12 alias_rpc -- common/autotest_common.sh@1707 -- 
# export 'LCOV=lcov 00:06:12.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.474 --rc genhtml_branch_coverage=1 00:06:12.474 --rc genhtml_function_coverage=1 00:06:12.474 --rc genhtml_legend=1 00:06:12.474 --rc geninfo_all_blocks=1 00:06:12.474 --rc geninfo_unexecuted_blocks=1 00:06:12.474 00:06:12.474 ' 00:06:12.474 04:04:12 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:12.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.474 --rc genhtml_branch_coverage=1 00:06:12.474 --rc genhtml_function_coverage=1 00:06:12.474 --rc genhtml_legend=1 00:06:12.474 --rc geninfo_all_blocks=1 00:06:12.474 --rc geninfo_unexecuted_blocks=1 00:06:12.474 00:06:12.474 ' 00:06:12.474 04:04:12 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:12.474 04:04:12 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=69718 00:06:12.474 04:04:12 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:12.474 04:04:12 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 69718 00:06:12.474 04:04:12 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 69718 ']' 00:06:12.474 04:04:12 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.474 04:04:12 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:12.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.474 04:04:12 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.474 04:04:12 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:12.474 04:04:12 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.474 [2024-11-21 04:04:12.337112] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:06:12.474 [2024-11-21 04:04:12.337239] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69718 ] 00:06:12.734 [2024-11-21 04:04:12.493051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.734 [2024-11-21 04:04:12.530939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.302 04:04:13 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:13.302 04:04:13 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:13.302 04:04:13 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:13.562 04:04:13 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 69718 00:06:13.562 04:04:13 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 69718 ']' 00:06:13.562 04:04:13 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 69718 00:06:13.562 04:04:13 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:06:13.562 04:04:13 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:13.562 04:04:13 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69718 00:06:13.562 04:04:13 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:13.562 04:04:13 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:13.562 killing process with pid 69718 00:06:13.562 04:04:13 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69718' 00:06:13.562 04:04:13 alias_rpc -- common/autotest_common.sh@973 -- # kill 69718 00:06:13.562 04:04:13 alias_rpc -- common/autotest_common.sh@978 -- # wait 69718 00:06:14.128 00:06:14.128 real 0m1.993s 00:06:14.129 user 0m1.872s 00:06:14.129 sys 0m0.638s 00:06:14.129 04:04:14 alias_rpc -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:06:14.129 04:04:14 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.129 ************************************ 00:06:14.129 END TEST alias_rpc 00:06:14.129 ************************************ 00:06:14.129 04:04:14 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:14.129 04:04:14 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:14.129 04:04:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:14.129 04:04:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.129 04:04:14 -- common/autotest_common.sh@10 -- # set +x 00:06:14.129 ************************************ 00:06:14.129 START TEST spdkcli_tcp 00:06:14.129 ************************************ 00:06:14.129 04:04:14 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:14.388 * Looking for test storage... 00:06:14.388 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:14.388 04:04:14 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:14.388 04:04:14 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:06:14.388 04:04:14 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:14.388 04:04:14 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:14.388 04:04:14 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:14.388 04:04:14 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:14.388 04:04:14 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:14.388 04:04:14 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:14.388 04:04:14 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:14.388 04:04:14 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:14.388 04:04:14 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:14.388 04:04:14 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:14.388 
04:04:14 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:14.388 04:04:14 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:14.388 04:04:14 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:14.388 04:04:14 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:14.388 04:04:14 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:14.388 04:04:14 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:14.388 04:04:14 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:14.388 04:04:14 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:14.388 04:04:14 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:14.388 04:04:14 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:14.388 04:04:14 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:14.388 04:04:14 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:14.388 04:04:14 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:14.388 04:04:14 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:14.388 04:04:14 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:14.388 04:04:14 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:14.388 04:04:14 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:14.388 04:04:14 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:14.388 04:04:14 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:14.388 04:04:14 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:14.388 04:04:14 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:14.388 04:04:14 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:14.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.388 --rc genhtml_branch_coverage=1 00:06:14.388 --rc genhtml_function_coverage=1 00:06:14.388 --rc genhtml_legend=1 
00:06:14.388 --rc geninfo_all_blocks=1 00:06:14.388 --rc geninfo_unexecuted_blocks=1 00:06:14.388 00:06:14.388 ' 00:06:14.388 04:04:14 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:14.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.388 --rc genhtml_branch_coverage=1 00:06:14.388 --rc genhtml_function_coverage=1 00:06:14.388 --rc genhtml_legend=1 00:06:14.388 --rc geninfo_all_blocks=1 00:06:14.388 --rc geninfo_unexecuted_blocks=1 00:06:14.388 00:06:14.388 ' 00:06:14.388 04:04:14 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:14.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.388 --rc genhtml_branch_coverage=1 00:06:14.388 --rc genhtml_function_coverage=1 00:06:14.388 --rc genhtml_legend=1 00:06:14.388 --rc geninfo_all_blocks=1 00:06:14.388 --rc geninfo_unexecuted_blocks=1 00:06:14.388 00:06:14.388 ' 00:06:14.388 04:04:14 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:14.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.388 --rc genhtml_branch_coverage=1 00:06:14.388 --rc genhtml_function_coverage=1 00:06:14.388 --rc genhtml_legend=1 00:06:14.388 --rc geninfo_all_blocks=1 00:06:14.388 --rc geninfo_unexecuted_blocks=1 00:06:14.388 00:06:14.388 ' 00:06:14.388 04:04:14 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:14.388 04:04:14 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:14.388 04:04:14 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:14.388 04:04:14 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:14.388 04:04:14 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:14.388 04:04:14 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:14.388 04:04:14 spdkcli_tcp -- 
spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:14.388 04:04:14 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:14.388 04:04:14 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:14.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.388 04:04:14 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=69805 00:06:14.388 04:04:14 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:14.388 04:04:14 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 69805 00:06:14.388 04:04:14 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 69805 ']' 00:06:14.388 04:04:14 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.388 04:04:14 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:14.388 04:04:14 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.388 04:04:14 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:14.388 04:04:14 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:14.648 [2024-11-21 04:04:14.419766] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:06:14.648 [2024-11-21 04:04:14.419909] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69805 ] 00:06:14.648 [2024-11-21 04:04:14.577096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:14.648 [2024-11-21 04:04:14.616734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.648 [2024-11-21 04:04:14.616810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.585 04:04:15 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:15.585 04:04:15 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:06:15.585 04:04:15 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=69819 00:06:15.585 04:04:15 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:15.585 04:04:15 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:15.585 [ 00:06:15.585 "bdev_malloc_delete", 00:06:15.585 "bdev_malloc_create", 00:06:15.585 "bdev_null_resize", 00:06:15.585 "bdev_null_delete", 00:06:15.585 "bdev_null_create", 00:06:15.585 "bdev_nvme_cuse_unregister", 00:06:15.585 "bdev_nvme_cuse_register", 00:06:15.585 "bdev_opal_new_user", 00:06:15.585 "bdev_opal_set_lock_state", 00:06:15.585 "bdev_opal_delete", 00:06:15.585 "bdev_opal_get_info", 00:06:15.585 "bdev_opal_create", 00:06:15.585 "bdev_nvme_opal_revert", 00:06:15.585 "bdev_nvme_opal_init", 00:06:15.585 "bdev_nvme_send_cmd", 00:06:15.585 "bdev_nvme_set_keys", 00:06:15.585 "bdev_nvme_get_path_iostat", 00:06:15.585 "bdev_nvme_get_mdns_discovery_info", 00:06:15.585 "bdev_nvme_stop_mdns_discovery", 00:06:15.585 "bdev_nvme_start_mdns_discovery", 00:06:15.585 "bdev_nvme_set_multipath_policy", 00:06:15.585 
"bdev_nvme_set_preferred_path", 00:06:15.585 "bdev_nvme_get_io_paths", 00:06:15.585 "bdev_nvme_remove_error_injection", 00:06:15.585 "bdev_nvme_add_error_injection", 00:06:15.585 "bdev_nvme_get_discovery_info", 00:06:15.585 "bdev_nvme_stop_discovery", 00:06:15.585 "bdev_nvme_start_discovery", 00:06:15.585 "bdev_nvme_get_controller_health_info", 00:06:15.585 "bdev_nvme_disable_controller", 00:06:15.585 "bdev_nvme_enable_controller", 00:06:15.585 "bdev_nvme_reset_controller", 00:06:15.585 "bdev_nvme_get_transport_statistics", 00:06:15.585 "bdev_nvme_apply_firmware", 00:06:15.585 "bdev_nvme_detach_controller", 00:06:15.585 "bdev_nvme_get_controllers", 00:06:15.585 "bdev_nvme_attach_controller", 00:06:15.585 "bdev_nvme_set_hotplug", 00:06:15.585 "bdev_nvme_set_options", 00:06:15.585 "bdev_passthru_delete", 00:06:15.585 "bdev_passthru_create", 00:06:15.585 "bdev_lvol_set_parent_bdev", 00:06:15.585 "bdev_lvol_set_parent", 00:06:15.585 "bdev_lvol_check_shallow_copy", 00:06:15.585 "bdev_lvol_start_shallow_copy", 00:06:15.585 "bdev_lvol_grow_lvstore", 00:06:15.585 "bdev_lvol_get_lvols", 00:06:15.586 "bdev_lvol_get_lvstores", 00:06:15.586 "bdev_lvol_delete", 00:06:15.586 "bdev_lvol_set_read_only", 00:06:15.586 "bdev_lvol_resize", 00:06:15.586 "bdev_lvol_decouple_parent", 00:06:15.586 "bdev_lvol_inflate", 00:06:15.586 "bdev_lvol_rename", 00:06:15.586 "bdev_lvol_clone_bdev", 00:06:15.586 "bdev_lvol_clone", 00:06:15.586 "bdev_lvol_snapshot", 00:06:15.586 "bdev_lvol_create", 00:06:15.586 "bdev_lvol_delete_lvstore", 00:06:15.586 "bdev_lvol_rename_lvstore", 00:06:15.586 "bdev_lvol_create_lvstore", 00:06:15.586 "bdev_raid_set_options", 00:06:15.586 "bdev_raid_remove_base_bdev", 00:06:15.586 "bdev_raid_add_base_bdev", 00:06:15.586 "bdev_raid_delete", 00:06:15.586 "bdev_raid_create", 00:06:15.586 "bdev_raid_get_bdevs", 00:06:15.586 "bdev_error_inject_error", 00:06:15.586 "bdev_error_delete", 00:06:15.586 "bdev_error_create", 00:06:15.586 "bdev_split_delete", 00:06:15.586 
"bdev_split_create", 00:06:15.586 "bdev_delay_delete", 00:06:15.586 "bdev_delay_create", 00:06:15.586 "bdev_delay_update_latency", 00:06:15.586 "bdev_zone_block_delete", 00:06:15.586 "bdev_zone_block_create", 00:06:15.586 "blobfs_create", 00:06:15.586 "blobfs_detect", 00:06:15.586 "blobfs_set_cache_size", 00:06:15.586 "bdev_aio_delete", 00:06:15.586 "bdev_aio_rescan", 00:06:15.586 "bdev_aio_create", 00:06:15.586 "bdev_ftl_set_property", 00:06:15.586 "bdev_ftl_get_properties", 00:06:15.586 "bdev_ftl_get_stats", 00:06:15.586 "bdev_ftl_unmap", 00:06:15.586 "bdev_ftl_unload", 00:06:15.586 "bdev_ftl_delete", 00:06:15.586 "bdev_ftl_load", 00:06:15.586 "bdev_ftl_create", 00:06:15.586 "bdev_virtio_attach_controller", 00:06:15.586 "bdev_virtio_scsi_get_devices", 00:06:15.586 "bdev_virtio_detach_controller", 00:06:15.586 "bdev_virtio_blk_set_hotplug", 00:06:15.586 "bdev_iscsi_delete", 00:06:15.586 "bdev_iscsi_create", 00:06:15.586 "bdev_iscsi_set_options", 00:06:15.586 "accel_error_inject_error", 00:06:15.586 "ioat_scan_accel_module", 00:06:15.586 "dsa_scan_accel_module", 00:06:15.586 "iaa_scan_accel_module", 00:06:15.586 "keyring_file_remove_key", 00:06:15.586 "keyring_file_add_key", 00:06:15.586 "keyring_linux_set_options", 00:06:15.586 "fsdev_aio_delete", 00:06:15.586 "fsdev_aio_create", 00:06:15.586 "iscsi_get_histogram", 00:06:15.586 "iscsi_enable_histogram", 00:06:15.586 "iscsi_set_options", 00:06:15.586 "iscsi_get_auth_groups", 00:06:15.586 "iscsi_auth_group_remove_secret", 00:06:15.586 "iscsi_auth_group_add_secret", 00:06:15.586 "iscsi_delete_auth_group", 00:06:15.586 "iscsi_create_auth_group", 00:06:15.586 "iscsi_set_discovery_auth", 00:06:15.586 "iscsi_get_options", 00:06:15.586 "iscsi_target_node_request_logout", 00:06:15.586 "iscsi_target_node_set_redirect", 00:06:15.586 "iscsi_target_node_set_auth", 00:06:15.586 "iscsi_target_node_add_lun", 00:06:15.586 "iscsi_get_stats", 00:06:15.586 "iscsi_get_connections", 00:06:15.586 "iscsi_portal_group_set_auth", 
00:06:15.586 "iscsi_start_portal_group", 00:06:15.586 "iscsi_delete_portal_group", 00:06:15.586 "iscsi_create_portal_group", 00:06:15.586 "iscsi_get_portal_groups", 00:06:15.586 "iscsi_delete_target_node", 00:06:15.586 "iscsi_target_node_remove_pg_ig_maps", 00:06:15.586 "iscsi_target_node_add_pg_ig_maps", 00:06:15.586 "iscsi_create_target_node", 00:06:15.586 "iscsi_get_target_nodes", 00:06:15.586 "iscsi_delete_initiator_group", 00:06:15.586 "iscsi_initiator_group_remove_initiators", 00:06:15.586 "iscsi_initiator_group_add_initiators", 00:06:15.586 "iscsi_create_initiator_group", 00:06:15.586 "iscsi_get_initiator_groups", 00:06:15.586 "nvmf_set_crdt", 00:06:15.586 "nvmf_set_config", 00:06:15.586 "nvmf_set_max_subsystems", 00:06:15.586 "nvmf_stop_mdns_prr", 00:06:15.586 "nvmf_publish_mdns_prr", 00:06:15.586 "nvmf_subsystem_get_listeners", 00:06:15.586 "nvmf_subsystem_get_qpairs", 00:06:15.586 "nvmf_subsystem_get_controllers", 00:06:15.586 "nvmf_get_stats", 00:06:15.586 "nvmf_get_transports", 00:06:15.586 "nvmf_create_transport", 00:06:15.586 "nvmf_get_targets", 00:06:15.586 "nvmf_delete_target", 00:06:15.586 "nvmf_create_target", 00:06:15.586 "nvmf_subsystem_allow_any_host", 00:06:15.586 "nvmf_subsystem_set_keys", 00:06:15.586 "nvmf_subsystem_remove_host", 00:06:15.586 "nvmf_subsystem_add_host", 00:06:15.586 "nvmf_ns_remove_host", 00:06:15.586 "nvmf_ns_add_host", 00:06:15.586 "nvmf_subsystem_remove_ns", 00:06:15.586 "nvmf_subsystem_set_ns_ana_group", 00:06:15.586 "nvmf_subsystem_add_ns", 00:06:15.586 "nvmf_subsystem_listener_set_ana_state", 00:06:15.586 "nvmf_discovery_get_referrals", 00:06:15.586 "nvmf_discovery_remove_referral", 00:06:15.586 "nvmf_discovery_add_referral", 00:06:15.586 "nvmf_subsystem_remove_listener", 00:06:15.586 "nvmf_subsystem_add_listener", 00:06:15.586 "nvmf_delete_subsystem", 00:06:15.586 "nvmf_create_subsystem", 00:06:15.586 "nvmf_get_subsystems", 00:06:15.586 "env_dpdk_get_mem_stats", 00:06:15.586 "nbd_get_disks", 00:06:15.586 
"nbd_stop_disk", 00:06:15.586 "nbd_start_disk", 00:06:15.586 "ublk_recover_disk", 00:06:15.586 "ublk_get_disks", 00:06:15.586 "ublk_stop_disk", 00:06:15.586 "ublk_start_disk", 00:06:15.586 "ublk_destroy_target", 00:06:15.586 "ublk_create_target", 00:06:15.586 "virtio_blk_create_transport", 00:06:15.586 "virtio_blk_get_transports", 00:06:15.586 "vhost_controller_set_coalescing", 00:06:15.586 "vhost_get_controllers", 00:06:15.586 "vhost_delete_controller", 00:06:15.586 "vhost_create_blk_controller", 00:06:15.586 "vhost_scsi_controller_remove_target", 00:06:15.586 "vhost_scsi_controller_add_target", 00:06:15.586 "vhost_start_scsi_controller", 00:06:15.586 "vhost_create_scsi_controller", 00:06:15.586 "thread_set_cpumask", 00:06:15.586 "scheduler_set_options", 00:06:15.586 "framework_get_governor", 00:06:15.586 "framework_get_scheduler", 00:06:15.586 "framework_set_scheduler", 00:06:15.586 "framework_get_reactors", 00:06:15.586 "thread_get_io_channels", 00:06:15.586 "thread_get_pollers", 00:06:15.586 "thread_get_stats", 00:06:15.586 "framework_monitor_context_switch", 00:06:15.586 "spdk_kill_instance", 00:06:15.586 "log_enable_timestamps", 00:06:15.586 "log_get_flags", 00:06:15.586 "log_clear_flag", 00:06:15.586 "log_set_flag", 00:06:15.586 "log_get_level", 00:06:15.586 "log_set_level", 00:06:15.586 "log_get_print_level", 00:06:15.586 "log_set_print_level", 00:06:15.586 "framework_enable_cpumask_locks", 00:06:15.586 "framework_disable_cpumask_locks", 00:06:15.586 "framework_wait_init", 00:06:15.586 "framework_start_init", 00:06:15.586 "scsi_get_devices", 00:06:15.586 "bdev_get_histogram", 00:06:15.586 "bdev_enable_histogram", 00:06:15.586 "bdev_set_qos_limit", 00:06:15.586 "bdev_set_qd_sampling_period", 00:06:15.586 "bdev_get_bdevs", 00:06:15.586 "bdev_reset_iostat", 00:06:15.586 "bdev_get_iostat", 00:06:15.586 "bdev_examine", 00:06:15.586 "bdev_wait_for_examine", 00:06:15.586 "bdev_set_options", 00:06:15.586 "accel_get_stats", 00:06:15.586 "accel_set_options", 
00:06:15.586 "accel_set_driver", 00:06:15.586 "accel_crypto_key_destroy", 00:06:15.586 "accel_crypto_keys_get", 00:06:15.586 "accel_crypto_key_create", 00:06:15.586 "accel_assign_opc", 00:06:15.586 "accel_get_module_info", 00:06:15.586 "accel_get_opc_assignments", 00:06:15.586 "vmd_rescan", 00:06:15.586 "vmd_remove_device", 00:06:15.586 "vmd_enable", 00:06:15.586 "sock_get_default_impl", 00:06:15.586 "sock_set_default_impl", 00:06:15.586 "sock_impl_set_options", 00:06:15.586 "sock_impl_get_options", 00:06:15.586 "iobuf_get_stats", 00:06:15.586 "iobuf_set_options", 00:06:15.586 "keyring_get_keys", 00:06:15.586 "framework_get_pci_devices", 00:06:15.586 "framework_get_config", 00:06:15.586 "framework_get_subsystems", 00:06:15.586 "fsdev_set_opts", 00:06:15.586 "fsdev_get_opts", 00:06:15.586 "trace_get_info", 00:06:15.586 "trace_get_tpoint_group_mask", 00:06:15.586 "trace_disable_tpoint_group", 00:06:15.586 "trace_enable_tpoint_group", 00:06:15.586 "trace_clear_tpoint_mask", 00:06:15.586 "trace_set_tpoint_mask", 00:06:15.586 "notify_get_notifications", 00:06:15.586 "notify_get_types", 00:06:15.586 "spdk_get_version", 00:06:15.586 "rpc_get_methods" 00:06:15.586 ] 00:06:15.586 04:04:15 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:15.586 04:04:15 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:15.586 04:04:15 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:15.586 04:04:15 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:15.586 04:04:15 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 69805 00:06:15.586 04:04:15 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 69805 ']' 00:06:15.586 04:04:15 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 69805 00:06:15.586 04:04:15 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:06:15.586 04:04:15 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:15.586 04:04:15 spdkcli_tcp -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69805 00:06:15.586 04:04:15 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:15.586 04:04:15 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:15.586 04:04:15 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69805' 00:06:15.586 killing process with pid 69805 00:06:15.586 04:04:15 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 69805 00:06:15.586 04:04:15 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 69805 00:06:16.525 00:06:16.525 real 0m2.051s 00:06:16.525 user 0m3.284s 00:06:16.525 sys 0m0.739s 00:06:16.525 04:04:16 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.525 04:04:16 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:16.525 ************************************ 00:06:16.525 END TEST spdkcli_tcp 00:06:16.525 ************************************ 00:06:16.525 04:04:16 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:16.525 04:04:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:16.525 04:04:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.525 04:04:16 -- common/autotest_common.sh@10 -- # set +x 00:06:16.525 ************************************ 00:06:16.525 START TEST dpdk_mem_utility 00:06:16.525 ************************************ 00:06:16.525 04:04:16 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:16.525 * Looking for test storage... 
00:06:16.525 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:16.525 04:04:16 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:16.525 04:04:16 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:06:16.525 04:04:16 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:16.525 04:04:16 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:16.525 04:04:16 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:16.525 04:04:16 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:16.525 04:04:16 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:16.525 04:04:16 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:16.525 04:04:16 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:16.525 04:04:16 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:16.525 04:04:16 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:16.526 04:04:16 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:16.526 04:04:16 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:16.526 04:04:16 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:16.526 04:04:16 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:16.526 04:04:16 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:16.526 04:04:16 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:16.526 04:04:16 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:16.526 04:04:16 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:16.526 04:04:16 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:16.526 04:04:16 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:16.526 04:04:16 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:16.526 04:04:16 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:16.526 04:04:16 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:16.526 04:04:16 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:16.526 04:04:16 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:16.526 04:04:16 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:16.526 04:04:16 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:16.526 04:04:16 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:16.526 04:04:16 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:16.526 04:04:16 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:16.526 04:04:16 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:16.526 04:04:16 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:16.526 04:04:16 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:16.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.526 --rc genhtml_branch_coverage=1 00:06:16.526 --rc genhtml_function_coverage=1 00:06:16.526 --rc genhtml_legend=1 00:06:16.526 --rc geninfo_all_blocks=1 00:06:16.526 --rc geninfo_unexecuted_blocks=1 00:06:16.526 00:06:16.526 ' 00:06:16.526 04:04:16 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:16.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.526 --rc genhtml_branch_coverage=1 00:06:16.526 --rc genhtml_function_coverage=1 00:06:16.526 --rc genhtml_legend=1 00:06:16.526 --rc geninfo_all_blocks=1 00:06:16.526 --rc 
geninfo_unexecuted_blocks=1 00:06:16.526 00:06:16.526 ' 00:06:16.526 04:04:16 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:16.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.526 --rc genhtml_branch_coverage=1 00:06:16.526 --rc genhtml_function_coverage=1 00:06:16.526 --rc genhtml_legend=1 00:06:16.526 --rc geninfo_all_blocks=1 00:06:16.526 --rc geninfo_unexecuted_blocks=1 00:06:16.526 00:06:16.526 ' 00:06:16.526 04:04:16 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:16.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.526 --rc genhtml_branch_coverage=1 00:06:16.526 --rc genhtml_function_coverage=1 00:06:16.526 --rc genhtml_legend=1 00:06:16.526 --rc geninfo_all_blocks=1 00:06:16.526 --rc geninfo_unexecuted_blocks=1 00:06:16.526 00:06:16.526 ' 00:06:16.526 04:04:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:16.526 04:04:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=69908 00:06:16.526 04:04:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:16.526 04:04:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 69908 00:06:16.526 04:04:16 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 69908 ']' 00:06:16.526 04:04:16 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.526 04:04:16 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:16.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.526 04:04:16 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:16.526 04:04:16 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:16.526 04:04:16 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:06:16.785 [2024-11-21 04:04:16.531127] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization...
00:06:16.785 [2024-11-21 04:04:16.531351] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69908 ]
00:06:16.785 [2024-11-21 04:04:16.688962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:16.785 [2024-11-21 04:04:16.727940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:17.728 04:04:17 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:17.728 04:04:17 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0
00:06:17.728 04:04:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT
00:06:17.728 04:04:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats
00:06:17.728 04:04:17 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:17.728 04:04:17 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:06:17.728 {
00:06:17.728 "filename": "/tmp/spdk_mem_dump.txt"
00:06:17.728 }
00:06:17.728 04:04:17 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:17.728 04:04:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
00:06:17.728 DPDK memory size 810.000000 MiB in 1 heap(s)
00:06:17.728 1 heaps totaling size 810.000000 MiB
00:06:17.728 size: 810.000000 MiB heap id: 0
00:06:17.728 end heaps----------
00:06:17.728 9 mempools totaling size 595.772034 MiB
00:06:17.728 size: 212.674988 MiB name: PDU_immediate_data_Pool
00:06:17.728 size: 158.602051 MiB name: PDU_data_out_Pool
00:06:17.728 size: 92.545471 MiB name: bdev_io_69908
00:06:17.728 size: 50.003479 MiB name: msgpool_69908
00:06:17.728 size: 36.509338 MiB name: fsdev_io_69908
00:06:17.728 size: 21.763794 MiB name: PDU_Pool
00:06:17.728 size: 19.513306 MiB name: SCSI_TASK_Pool
00:06:17.728 size: 4.133484 MiB name: evtpool_69908
00:06:17.728 size: 0.026123 MiB name: Session_Pool
00:06:17.728 end mempools-------
00:06:17.728 6 memzones totaling size 4.142822 MiB
00:06:17.728 size: 1.000366 MiB name: RG_ring_0_69908
00:06:17.728 size: 1.000366 MiB name: RG_ring_1_69908
00:06:17.728 size: 1.000366 MiB name: RG_ring_4_69908
00:06:17.728 size: 1.000366 MiB name: RG_ring_5_69908
00:06:17.728 size: 0.125366 MiB name: RG_ring_2_69908
00:06:17.728 size: 0.015991 MiB name: RG_ring_3_69908
00:06:17.728 end memzones-------
00:06:17.728 04:04:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0
00:06:17.728 heap id: 0 total size: 810.000000 MiB number of busy elements: 294 number of free elements: 15
00:06:17.728 list of free elements. 
size: 10.816711 MiB 00:06:17.728 element at address: 0x200018a00000 with size: 0.999878 MiB 00:06:17.728 element at address: 0x200018c00000 with size: 0.999878 MiB 00:06:17.728 element at address: 0x200031800000 with size: 0.994446 MiB 00:06:17.728 element at address: 0x200000400000 with size: 0.993958 MiB 00:06:17.728 element at address: 0x200006400000 with size: 0.959839 MiB 00:06:17.728 element at address: 0x200012c00000 with size: 0.954285 MiB 00:06:17.728 element at address: 0x200018e00000 with size: 0.936584 MiB 00:06:17.728 element at address: 0x200000200000 with size: 0.717346 MiB 00:06:17.728 element at address: 0x20001a600000 with size: 0.570984 MiB 00:06:17.728 element at address: 0x20000a600000 with size: 0.488892 MiB 00:06:17.728 element at address: 0x200000c00000 with size: 0.487000 MiB 00:06:17.728 element at address: 0x200019000000 with size: 0.485657 MiB 00:06:17.728 element at address: 0x200003e00000 with size: 0.480286 MiB 00:06:17.728 element at address: 0x200027a00000 with size: 0.395935 MiB 00:06:17.728 element at address: 0x200000800000 with size: 0.351746 MiB 00:06:17.728 list of standard malloc elements. 
size: 199.264404 MiB 00:06:17.728 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:06:17.728 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:06:17.728 element at address: 0x200018afff80 with size: 1.000122 MiB 00:06:17.728 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:06:17.728 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:17.729 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:17.729 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:06:17.729 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:17.729 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:06:17.729 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:17.729 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:17.729 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:06:17.729 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:06:17.729 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:06:17.729 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:06:17.729 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:06:17.729 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:06:17.729 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:06:17.729 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:06:17.729 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:06:17.729 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:06:17.729 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:06:17.729 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:06:17.729 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:06:17.729 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:06:17.729 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:06:17.729 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:06:17.729 element at 
address: 0x2000004ff340 with size: 0.000183 MiB 00:06:17.729 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:06:17.729 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:06:17.729 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:06:17.729 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:06:17.729 element at address: 0x2000004ff700 with size: 0.000183 MiB 00:06:17.729 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:06:17.729 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:06:17.729 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:06:17.729 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:06:17.729 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:06:17.729 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:06:17.729 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:06:17.729 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:06:17.729 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:06:17.729 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:06:17.729 element at address: 0x20000085e580 with size: 0.000183 MiB 00:06:17.729 element at address: 0x20000087e840 with size: 0.000183 MiB 00:06:17.729 element at address: 0x20000087e900 with size: 0.000183 MiB 00:06:17.729 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:06:17.729 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:06:17.729 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:06:17.729 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:06:17.729 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:06:17.729 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:06:17.729 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:06:17.729 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:06:17.729 element at address: 0x20000087efc0 with size: 0.000183 MiB 
00:06:17.729 element at address: 0x20000087f080 with size: 0.000183 MiB 00:06:17.729 element at address: 0x20000087f140 with size: 0.000183 MiB 00:06:17.729 element at address: 0x20000087f200 with size: 0.000183 MiB 00:06:17.729 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:06:17.729 element at address: 0x20000087f380 with size: 0.000183 MiB 00:06:17.729 element at address: 0x20000087f440 with size: 0.000183 MiB 00:06:17.729 element at address: 0x20000087f500 with size: 0.000183 MiB 00:06:17.729 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:06:17.729 element at address: 0x20000087f680 with size: 0.000183 MiB 00:06:17.729 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:06:17.729 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:06:17.729 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:06:17.729 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:06:17.729 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:06:17.729 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:06:17.729 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:06:17.729 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:06:17.729 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:06:17.729 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:06:17.729 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:06:17.729 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:06:17.729 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:06:17.729 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:06:17.729 element at address: 0x200000c7d3c0 with size: 0.000183 MiB 00:06:17.729 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:06:17.729 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:06:17.729 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:06:17.729 element at address: 0x200000c7d6c0 with 
size: 0.000183 MiB 00:06:17.729 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:06:17.729 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:06:17.729 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:06:17.729 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:06:17.729 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:06:17.729 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:06:17.729 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:06:17.729 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:06:17.729 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:06:17.729 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:06:17.729 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:06:17.729 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:06:17.729 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:06:17.729 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:06:17.729 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:06:17.729 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:06:17.729 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:06:17.729 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:06:17.729 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:06:17.729 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:06:17.729 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:06:17.729 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:06:17.729 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:06:17.729 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:06:17.729 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:06:17.729 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:06:17.729 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:06:17.729 element at address: 
0x200000c7ebc0 with size: 0.000183 MiB 00:06:17.729 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:06:17.729 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:06:17.729 element at address: 0x200000cff000 with size: 0.000183 MiB 00:06:17.729 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:06:17.729 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:06:17.729 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:06:17.729 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:06:17.729 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:06:17.729 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:06:17.729 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:06:17.729 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:06:17.729 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:06:17.729 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:06:17.729 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:06:17.729 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:06:17.729 element at address: 0x200003efb980 with size: 0.000183 MiB 00:06:17.729 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:06:17.729 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:06:17.729 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:06:17.729 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:06:17.729 element at address: 0x20000a67d4c0 with size: 0.000183 MiB 00:06:17.729 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:06:17.729 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:06:17.729 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:06:17.729 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:06:17.729 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:06:17.729 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:06:17.729 
element at address: 0x20000a67da00 with size: 0.000183 MiB 00:06:17.729 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:06:17.729 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:06:17.729 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:06:17.729 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:06:17.729 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:06:17.729 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:06:17.729 element at address: 0x20001a6922c0 with size: 0.000183 MiB 00:06:17.729 element at address: 0x20001a692380 with size: 0.000183 MiB 00:06:17.729 element at address: 0x20001a692440 with size: 0.000183 MiB 00:06:17.729 element at address: 0x20001a692500 with size: 0.000183 MiB 00:06:17.729 element at address: 0x20001a6925c0 with size: 0.000183 MiB 00:06:17.729 element at address: 0x20001a692680 with size: 0.000183 MiB 00:06:17.729 element at address: 0x20001a692740 with size: 0.000183 MiB 00:06:17.729 element at address: 0x20001a692800 with size: 0.000183 MiB 00:06:17.729 element at address: 0x20001a6928c0 with size: 0.000183 MiB 00:06:17.729 element at address: 0x20001a692980 with size: 0.000183 MiB 00:06:17.729 element at address: 0x20001a692a40 with size: 0.000183 MiB 00:06:17.729 element at address: 0x20001a692b00 with size: 0.000183 MiB 00:06:17.729 element at address: 0x20001a692bc0 with size: 0.000183 MiB 00:06:17.729 element at address: 0x20001a692c80 with size: 0.000183 MiB 00:06:17.730 element at address: 0x20001a692d40 with size: 0.000183 MiB 00:06:17.730 element at address: 0x20001a692e00 with size: 0.000183 MiB 00:06:17.730 element at address: 0x20001a692ec0 with size: 0.000183 MiB 00:06:17.730 element at address: 0x20001a692f80 with size: 0.000183 MiB 00:06:17.730 element at address: 0x20001a693040 with size: 0.000183 MiB 00:06:17.730 element at address: 0x20001a693100 with size: 0.000183 MiB 00:06:17.730 element at address: 0x20001a6931c0 with size: 0.000183 
MiB 00:06:17.730 element at address: 0x20001a693280 with size: 0.000183 MiB 00:06:17.730 element at address: 0x20001a693340 with size: 0.000183 MiB 00:06:17.730 element at address: 0x20001a693400 with size: 0.000183 MiB 00:06:17.730 element at address: 0x20001a6934c0 with size: 0.000183 MiB 00:06:17.730 element at address: 0x20001a693580 with size: 0.000183 MiB 00:06:17.730 element at address: 0x20001a693640 with size: 0.000183 MiB 00:06:17.730 element at address: 0x20001a693700 with size: 0.000183 MiB 00:06:17.730 element at address: 0x20001a6937c0 with size: 0.000183 MiB 00:06:17.730 element at address: 0x20001a693880 with size: 0.000183 MiB 00:06:17.730 element at address: 0x20001a693940 with size: 0.000183 MiB 00:06:17.730 element at address: 0x20001a693a00 with size: 0.000183 MiB 00:06:17.730 element at address: 0x20001a693ac0 with size: 0.000183 MiB 00:06:17.730 element at address: 0x20001a693b80 with size: 0.000183 MiB 00:06:17.730 element at address: 0x20001a693c40 with size: 0.000183 MiB 00:06:17.730 element at address: 0x20001a693d00 with size: 0.000183 MiB 00:06:17.730 element at address: 0x20001a693dc0 with size: 0.000183 MiB 00:06:17.730 element at address: 0x20001a693e80 with size: 0.000183 MiB 00:06:17.730 element at address: 0x20001a693f40 with size: 0.000183 MiB 00:06:17.730 element at address: 0x20001a694000 with size: 0.000183 MiB 00:06:17.730 element at address: 0x20001a6940c0 with size: 0.000183 MiB 00:06:17.730 element at address: 0x20001a694180 with size: 0.000183 MiB 00:06:17.730 element at address: 0x20001a694240 with size: 0.000183 MiB 00:06:17.730 element at address: 0x20001a694300 with size: 0.000183 MiB 00:06:17.730 element at address: 0x20001a6943c0 with size: 0.000183 MiB 00:06:17.730 element at address: 0x20001a694480 with size: 0.000183 MiB 00:06:17.730 element at address: 0x20001a694540 with size: 0.000183 MiB 00:06:17.730 element at address: 0x20001a694600 with size: 0.000183 MiB 00:06:17.730 element at address: 0x20001a6946c0 
with size: 0.000183 MiB 00:06:17.730 element at address: 0x20001a694780 with size: 0.000183 MiB [~120 further 0.000183 MiB elements in ranges 0x20001a694840-0x20001a695440 and 0x200027a655c0-0x200027a6ff00 elided] 00:06:17.730 list of memzone associated elements.
size: 599.918884 MiB 00:06:17.730 element at address: 0x20001a695500 with size: 211.416748 MiB 00:06:17.730 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:17.730 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:06:17.730 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:17.730 element at address: 0x200012df4780 with size: 92.045044 MiB 00:06:17.730 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_69908_0 00:06:17.730 element at address: 0x200000dff380 with size: 48.003052 MiB 00:06:17.731 associated memzone info: size: 48.002930 MiB name: MP_msgpool_69908_0 00:06:17.731 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:06:17.731 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_69908_0 00:06:17.731 element at address: 0x2000191be940 with size: 20.255554 MiB 00:06:17.731 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:17.731 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:06:17.731 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:17.731 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:06:17.731 associated memzone info: size: 3.000122 MiB name: MP_evtpool_69908_0 00:06:17.731 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:06:17.731 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_69908 00:06:17.731 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:17.731 associated memzone info: size: 1.007996 MiB name: MP_evtpool_69908 00:06:17.731 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:06:17.731 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:17.731 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:06:17.731 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:17.731 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:06:17.731 associated 
memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:17.731 element at address: 0x200003efba40 with size: 1.008118 MiB 00:06:17.731 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:17.731 element at address: 0x200000cff180 with size: 1.000488 MiB 00:06:17.731 associated memzone info: size: 1.000366 MiB name: RG_ring_0_69908 00:06:17.731 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:06:17.731 associated memzone info: size: 1.000366 MiB name: RG_ring_1_69908 00:06:17.731 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:06:17.731 associated memzone info: size: 1.000366 MiB name: RG_ring_4_69908 00:06:17.731 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:06:17.731 associated memzone info: size: 1.000366 MiB name: RG_ring_5_69908 00:06:17.731 element at address: 0x20000087f740 with size: 0.500488 MiB 00:06:17.731 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_69908 00:06:17.731 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:06:17.731 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_69908 00:06:17.731 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:06:17.731 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:17.731 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:06:17.731 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:17.731 element at address: 0x20001907c540 with size: 0.250488 MiB 00:06:17.731 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:17.731 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:06:17.731 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_69908 00:06:17.731 element at address: 0x20000085e640 with size: 0.125488 MiB 00:06:17.731 associated memzone info: size: 0.125366 MiB name: RG_ring_2_69908 00:06:17.731 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:06:17.731 
associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:17.731 element at address: 0x200027a65740 with size: 0.023743 MiB 00:06:17.731 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:17.731 element at address: 0x20000085a380 with size: 0.016113 MiB 00:06:17.731 associated memzone info: size: 0.015991 MiB name: RG_ring_3_69908 00:06:17.731 element at address: 0x200027a6b880 with size: 0.002441 MiB 00:06:17.731 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:17.731 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:06:17.731 associated memzone info: size: 0.000183 MiB name: MP_msgpool_69908 00:06:17.731 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:06:17.731 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_69908 00:06:17.731 element at address: 0x20000085a180 with size: 0.000305 MiB 00:06:17.731 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_69908 00:06:17.731 element at address: 0x200027a6c340 with size: 0.000305 MiB 00:06:17.731 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:17.731 04:04:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:17.731 04:04:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 69908 00:06:17.731 04:04:17 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 69908 ']' 00:06:17.731 04:04:17 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 69908 00:06:17.731 04:04:17 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:06:17.731 04:04:17 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:17.731 04:04:17 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69908 00:06:17.731 04:04:17 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:17.731 04:04:17 dpdk_mem_utility -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:17.731 killing process with pid 69908 00:06:17.731 04:04:17 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69908' 00:06:17.731 04:04:17 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 69908 00:06:17.731 04:04:17 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 69908 00:06:18.313 00:06:18.313 real 0m1.878s 00:06:18.313 user 0m1.644s 00:06:18.313 sys 0m0.662s 00:06:18.313 04:04:18 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.313 04:04:18 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:18.313 ************************************ 00:06:18.313 END TEST dpdk_mem_utility 00:06:18.313 ************************************ 00:06:18.313 04:04:18 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:18.313 04:04:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:18.313 04:04:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.313 04:04:18 -- common/autotest_common.sh@10 -- # set +x 00:06:18.313 ************************************ 00:06:18.313 START TEST event 00:06:18.313 ************************************ 00:06:18.313 04:04:18 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:18.313 * Looking for test storage... 
00:06:18.313 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:18.313 04:04:18 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:18.313 04:04:18 event -- common/autotest_common.sh@1693 -- # lcov --version 00:06:18.313 04:04:18 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:18.574 04:04:18 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:18.574 04:04:18 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:18.574 04:04:18 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:18.574 04:04:18 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:18.574 04:04:18 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:18.574 04:04:18 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:18.574 04:04:18 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:18.574 04:04:18 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:18.574 04:04:18 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:18.574 04:04:18 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:18.574 04:04:18 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:18.574 04:04:18 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:18.574 04:04:18 event -- scripts/common.sh@344 -- # case "$op" in 00:06:18.574 04:04:18 event -- scripts/common.sh@345 -- # : 1 00:06:18.574 04:04:18 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:18.574 04:04:18 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:18.574 04:04:18 event -- scripts/common.sh@365 -- # decimal 1 00:06:18.574 04:04:18 event -- scripts/common.sh@353 -- # local d=1 00:06:18.574 04:04:18 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:18.574 04:04:18 event -- scripts/common.sh@355 -- # echo 1 00:06:18.574 04:04:18 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:18.574 04:04:18 event -- scripts/common.sh@366 -- # decimal 2 00:06:18.574 04:04:18 event -- scripts/common.sh@353 -- # local d=2 00:06:18.574 04:04:18 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:18.574 04:04:18 event -- scripts/common.sh@355 -- # echo 2 00:06:18.574 04:04:18 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:18.574 04:04:18 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:18.574 04:04:18 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:18.574 04:04:18 event -- scripts/common.sh@368 -- # return 0 00:06:18.574 04:04:18 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:18.574 04:04:18 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:18.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.574 --rc genhtml_branch_coverage=1 00:06:18.574 --rc genhtml_function_coverage=1 00:06:18.574 --rc genhtml_legend=1 00:06:18.574 --rc geninfo_all_blocks=1 00:06:18.574 --rc geninfo_unexecuted_blocks=1 00:06:18.574 00:06:18.574 ' 00:06:18.574 04:04:18 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:18.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.574 --rc genhtml_branch_coverage=1 00:06:18.574 --rc genhtml_function_coverage=1 00:06:18.574 --rc genhtml_legend=1 00:06:18.574 --rc geninfo_all_blocks=1 00:06:18.574 --rc geninfo_unexecuted_blocks=1 00:06:18.574 00:06:18.574 ' 00:06:18.574 04:04:18 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:18.574 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:18.574 --rc genhtml_branch_coverage=1 00:06:18.574 --rc genhtml_function_coverage=1 00:06:18.574 --rc genhtml_legend=1 00:06:18.574 --rc geninfo_all_blocks=1 00:06:18.574 --rc geninfo_unexecuted_blocks=1 00:06:18.574 00:06:18.574 ' 00:06:18.574 04:04:18 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:18.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.574 --rc genhtml_branch_coverage=1 00:06:18.574 --rc genhtml_function_coverage=1 00:06:18.574 --rc genhtml_legend=1 00:06:18.574 --rc geninfo_all_blocks=1 00:06:18.574 --rc geninfo_unexecuted_blocks=1 00:06:18.574 00:06:18.574 ' 00:06:18.574 04:04:18 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:18.574 04:04:18 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:18.574 04:04:18 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:18.574 04:04:18 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:06:18.574 04:04:18 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.574 04:04:18 event -- common/autotest_common.sh@10 -- # set +x 00:06:18.574 ************************************ 00:06:18.574 START TEST event_perf 00:06:18.574 ************************************ 00:06:18.574 04:04:18 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:18.574 Running I/O for 1 seconds...[2024-11-21 04:04:18.440566] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:06:18.574 [2024-11-21 04:04:18.440715] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69988 ] 00:06:18.833 [2024-11-21 04:04:18.597580] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:18.833 [2024-11-21 04:04:18.643046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:18.833 [2024-11-21 04:04:18.643275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:18.833 Running I/O for 1 seconds...[2024-11-21 04:04:18.646286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.833 [2024-11-21 04:04:18.646391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:19.770 00:06:19.770 lcore 0: 211490 00:06:19.770 lcore 1: 211491 00:06:19.770 lcore 2: 211490 00:06:19.770 lcore 3: 211490 00:06:19.770 done. 
00:06:19.770 00:06:19.770 real 0m1.343s 00:06:19.770 user 0m4.105s 00:06:19.770 sys 0m0.117s 00:06:19.770 04:04:19 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.770 04:04:19 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:19.770 ************************************ 00:06:19.770 END TEST event_perf 00:06:19.770 ************************************ 00:06:20.029 04:04:19 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:20.029 04:04:19 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:20.029 04:04:19 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:20.029 04:04:19 event -- common/autotest_common.sh@10 -- # set +x 00:06:20.029 ************************************ 00:06:20.029 START TEST event_reactor 00:06:20.029 ************************************ 00:06:20.029 04:04:19 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:20.029 [2024-11-21 04:04:19.852089] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:06:20.030 [2024-11-21 04:04:19.852342] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70033 ] 00:06:20.289 [2024-11-21 04:04:20.008036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.289 [2024-11-21 04:04:20.046710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.231 test_start 00:06:21.231 oneshot 00:06:21.231 tick 100 00:06:21.231 tick 100 00:06:21.231 tick 250 00:06:21.231 tick 100 00:06:21.231 tick 100 00:06:21.231 tick 100 00:06:21.231 tick 250 00:06:21.231 tick 500 00:06:21.231 tick 100 00:06:21.231 tick 100 00:06:21.231 tick 250 00:06:21.231 tick 100 00:06:21.231 tick 100 00:06:21.231 test_end 00:06:21.231 00:06:21.231 real 0m1.321s 00:06:21.231 user 0m1.121s 00:06:21.231 sys 0m0.092s 00:06:21.231 04:04:21 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:21.231 04:04:21 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:21.231 ************************************ 00:06:21.231 END TEST event_reactor 00:06:21.231 ************************************ 00:06:21.231 04:04:21 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:21.231 04:04:21 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:21.231 04:04:21 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:21.231 04:04:21 event -- common/autotest_common.sh@10 -- # set +x 00:06:21.231 ************************************ 00:06:21.231 START TEST event_reactor_perf 00:06:21.231 ************************************ 00:06:21.231 04:04:21 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:21.490 [2024-11-21 
04:04:21.242669] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:06:21.490 [2024-11-21 04:04:21.242813] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70064 ] 00:06:21.490 [2024-11-21 04:04:21.399537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.490 [2024-11-21 04:04:21.439357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.872 test_start 00:06:22.872 test_end 00:06:22.872 Performance: 404725 events per second 00:06:22.872 00:06:22.872 real 0m1.316s 00:06:22.872 user 0m1.127s 00:06:22.872 sys 0m0.082s 00:06:22.872 04:04:22 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.872 04:04:22 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:22.872 ************************************ 00:06:22.872 END TEST event_reactor_perf 00:06:22.872 ************************************ 00:06:22.872 04:04:22 event -- event/event.sh@49 -- # uname -s 00:06:22.872 04:04:22 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:22.872 04:04:22 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:22.872 04:04:22 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:22.872 04:04:22 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.872 04:04:22 event -- common/autotest_common.sh@10 -- # set +x 00:06:22.872 ************************************ 00:06:22.872 START TEST event_scheduler 00:06:22.872 ************************************ 00:06:22.872 04:04:22 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:22.872 * Looking for test storage... 
00:06:22.872 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:22.872 04:04:22 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:22.872 04:04:22 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:06:22.872 04:04:22 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:22.872 04:04:22 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:22.872 04:04:22 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:22.872 04:04:22 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:22.872 04:04:22 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:22.872 04:04:22 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:22.872 04:04:22 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:22.872 04:04:22 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:22.872 04:04:22 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:22.872 04:04:22 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:22.872 04:04:22 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:22.872 04:04:22 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:22.872 04:04:22 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:22.872 04:04:22 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:22.872 04:04:22 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:22.872 04:04:22 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:22.872 04:04:22 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:22.872 04:04:22 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:22.872 04:04:22 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:22.872 04:04:22 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:22.872 04:04:22 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:22.872 04:04:22 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:22.872 04:04:22 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:22.872 04:04:22 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:22.872 04:04:22 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:22.872 04:04:22 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:22.872 04:04:22 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:22.872 04:04:22 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:22.872 04:04:22 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:22.872 04:04:22 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:22.873 04:04:22 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:22.873 04:04:22 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:22.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.873 --rc genhtml_branch_coverage=1 00:06:22.873 --rc genhtml_function_coverage=1 00:06:22.873 --rc genhtml_legend=1 00:06:22.873 --rc geninfo_all_blocks=1 00:06:22.873 --rc geninfo_unexecuted_blocks=1 00:06:22.873 00:06:22.873 ' 00:06:22.873 04:04:22 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:22.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.873 --rc genhtml_branch_coverage=1 00:06:22.873 --rc genhtml_function_coverage=1 00:06:22.873 --rc 
genhtml_legend=1 00:06:22.873 --rc geninfo_all_blocks=1 00:06:22.873 --rc geninfo_unexecuted_blocks=1 00:06:22.873 00:06:22.873 ' 00:06:22.873 04:04:22 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:22.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.873 --rc genhtml_branch_coverage=1 00:06:22.873 --rc genhtml_function_coverage=1 00:06:22.873 --rc genhtml_legend=1 00:06:22.873 --rc geninfo_all_blocks=1 00:06:22.873 --rc geninfo_unexecuted_blocks=1 00:06:22.873 00:06:22.873 ' 00:06:22.873 04:04:22 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:22.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.873 --rc genhtml_branch_coverage=1 00:06:22.873 --rc genhtml_function_coverage=1 00:06:22.873 --rc genhtml_legend=1 00:06:22.873 --rc geninfo_all_blocks=1 00:06:22.873 --rc geninfo_unexecuted_blocks=1 00:06:22.873 00:06:22.873 ' 00:06:22.873 04:04:22 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:22.873 04:04:22 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=70135 00:06:22.873 04:04:22 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:22.873 04:04:22 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:22.873 04:04:22 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 70135 00:06:22.873 04:04:22 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 70135 ']' 00:06:22.873 04:04:22 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.873 04:04:22 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:22.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:22.873 04:04:22 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.873 04:04:22 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:22.873 04:04:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:23.134 [2024-11-21 04:04:22.914498] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:06:23.134 [2024-11-21 04:04:22.914657] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70135 ] 00:06:23.134 [2024-11-21 04:04:23.077157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:23.394 [2024-11-21 04:04:23.121957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.394 [2024-11-21 04:04:23.122118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.394 [2024-11-21 04:04:23.122374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:23.394 [2024-11-21 04:04:23.122421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:23.965 04:04:23 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:23.965 04:04:23 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:06:23.965 04:04:23 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:23.965 04:04:23 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.965 04:04:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:23.965 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:23.965 POWER: Cannot set governor of lcore 0 to userspace 00:06:23.965 POWER: 
failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:23.965 POWER: Cannot set governor of lcore 0 to performance 00:06:23.965 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:23.965 POWER: Cannot set governor of lcore 0 to userspace 00:06:23.965 GUEST_CHANNEL: Unable to to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:23.965 POWER: Unable to set Power Management Environment for lcore 0 00:06:23.965 [2024-11-21 04:04:23.758974] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:06:23.965 [2024-11-21 04:04:23.759013] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:06:23.965 [2024-11-21 04:04:23.759053] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:23.965 [2024-11-21 04:04:23.759096] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:23.965 [2024-11-21 04:04:23.759105] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:23.965 [2024-11-21 04:04:23.759116] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:23.965 04:04:23 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.965 04:04:23 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:23.965 04:04:23 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.965 04:04:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:23.965 [2024-11-21 04:04:23.883184] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:06:23.965 04:04:23 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.965 04:04:23 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:23.965 04:04:23 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:23.965 04:04:23 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.965 04:04:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:23.965 ************************************ 00:06:23.965 START TEST scheduler_create_thread 00:06:23.965 ************************************ 00:06:23.965 04:04:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:06:23.965 04:04:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:23.965 04:04:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.965 04:04:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:23.965 2 00:06:23.965 04:04:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.965 04:04:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:23.965 04:04:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.965 04:04:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:23.965 3 00:06:23.965 04:04:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.965 04:04:23 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:23.965 04:04:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.965 04:04:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:23.965 4 00:06:23.965 04:04:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.966 04:04:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:23.966 04:04:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.226 04:04:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.226 5 00:06:24.226 04:04:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.226 04:04:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:24.226 04:04:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.226 04:04:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.226 6 00:06:24.226 04:04:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.226 04:04:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:24.226 04:04:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.226 04:04:23 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:06:24.226 7 00:06:24.226 04:04:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.226 04:04:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:24.226 04:04:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.226 04:04:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.226 8 00:06:24.226 04:04:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.226 04:04:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:24.226 04:04:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.226 04:04:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.226 9 00:06:24.226 04:04:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.226 04:04:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:24.226 04:04:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.226 04:04:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.486 10 00:06:24.486 04:04:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.486 04:04:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:06:24.486 04:04:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.486 04:04:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.868 04:04:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.868 04:04:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:25.868 04:04:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:25.868 04:04:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.868 04:04:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:26.807 04:04:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:26.807 04:04:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:26.807 04:04:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:26.807 04:04:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.376 04:04:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:27.376 04:04:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:27.636 04:04:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:27.636 04:04:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:27.636 04:04:27 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.204 04:04:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.204 00:06:28.204 real 0m4.208s 00:06:28.204 user 0m0.027s 00:06:28.204 sys 0m0.011s 00:06:28.204 04:04:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:28.204 04:04:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.204 ************************************ 00:06:28.204 END TEST scheduler_create_thread 00:06:28.204 ************************************ 00:06:28.204 04:04:28 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:28.204 04:04:28 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 70135 00:06:28.204 04:04:28 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 70135 ']' 00:06:28.204 04:04:28 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 70135 00:06:28.204 04:04:28 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:06:28.204 04:04:28 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:28.204 04:04:28 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70135 00:06:28.463 04:04:28 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:28.463 04:04:28 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:28.463 killing process with pid 70135 00:06:28.463 04:04:28 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70135' 00:06:28.463 04:04:28 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 70135 00:06:28.463 04:04:28 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 70135 00:06:28.463 [2024-11-21 04:04:28.384466] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:29.031 00:06:29.031 real 0m6.170s 00:06:29.031 user 0m13.272s 00:06:29.031 sys 0m0.583s 00:06:29.031 04:04:28 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:29.031 04:04:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:29.031 ************************************ 00:06:29.031 END TEST event_scheduler 00:06:29.031 ************************************ 00:06:29.031 04:04:28 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:29.031 04:04:28 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:29.031 04:04:28 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:29.031 04:04:28 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.031 04:04:28 event -- common/autotest_common.sh@10 -- # set +x 00:06:29.031 ************************************ 00:06:29.031 START TEST app_repeat 00:06:29.031 ************************************ 00:06:29.031 04:04:28 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:06:29.031 04:04:28 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.031 04:04:28 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.031 04:04:28 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:29.031 04:04:28 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:29.031 04:04:28 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:29.031 04:04:28 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:29.031 04:04:28 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:29.031 04:04:28 event.app_repeat -- event/event.sh@19 -- # repeat_pid=70252 00:06:29.031 04:04:28 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:29.031 04:04:28 event.app_repeat -- 
event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:29.031 Process app_repeat pid: 70252 00:06:29.031 04:04:28 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 70252' 00:06:29.031 04:04:28 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:29.031 spdk_app_start Round 0 00:06:29.031 04:04:28 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:29.031 04:04:28 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70252 /var/tmp/spdk-nbd.sock 00:06:29.032 04:04:28 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70252 ']' 00:06:29.032 04:04:28 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:29.032 04:04:28 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:29.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:29.032 04:04:28 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:29.032 04:04:28 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:29.032 04:04:28 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:29.032 [2024-11-21 04:04:28.901352] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:06:29.032 [2024-11-21 04:04:28.901478] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70252 ] 00:06:29.290 [2024-11-21 04:04:29.055383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:29.290 [2024-11-21 04:04:29.095454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.290 [2024-11-21 04:04:29.095547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:29.856 04:04:29 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:29.856 04:04:29 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:29.856 04:04:29 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:30.114 Malloc0 00:06:30.114 04:04:29 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:30.373 Malloc1 00:06:30.373 04:04:30 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:30.373 04:04:30 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.373 04:04:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:30.373 04:04:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:30.373 04:04:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.373 04:04:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:30.373 04:04:30 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:30.373 04:04:30 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.373 04:04:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:30.373 04:04:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:30.373 04:04:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.373 04:04:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:30.373 04:04:30 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:30.373 04:04:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:30.373 04:04:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:30.373 04:04:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:30.373 /dev/nbd0 00:06:30.633 04:04:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:30.633 04:04:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:30.633 04:04:30 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:30.633 04:04:30 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:30.633 04:04:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:30.633 04:04:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:30.633 04:04:30 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:30.633 04:04:30 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:30.633 04:04:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:30.633 04:04:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:30.633 04:04:30 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:30.633 1+0 records in 00:06:30.633 1+0 
records out 00:06:30.633 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000356029 s, 11.5 MB/s 00:06:30.633 04:04:30 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:30.633 04:04:30 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:30.633 04:04:30 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:30.633 04:04:30 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:30.633 04:04:30 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:30.633 04:04:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:30.633 04:04:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:30.633 04:04:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:30.633 /dev/nbd1 00:06:30.633 04:04:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:30.904 04:04:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:30.904 04:04:30 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:30.904 04:04:30 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:30.904 04:04:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:30.904 04:04:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:30.904 04:04:30 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:30.904 04:04:30 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:30.904 04:04:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:30.904 04:04:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:30.904 04:04:30 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:30.904 1+0 records in 00:06:30.904 1+0 records out 00:06:30.904 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000408527 s, 10.0 MB/s 00:06:30.904 04:04:30 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:30.904 04:04:30 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:30.904 04:04:30 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:30.904 04:04:30 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:30.904 04:04:30 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:30.904 04:04:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:30.904 04:04:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:30.904 04:04:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:30.904 04:04:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.904 04:04:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:31.180 04:04:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:31.180 { 00:06:31.180 "nbd_device": "/dev/nbd0", 00:06:31.180 "bdev_name": "Malloc0" 00:06:31.180 }, 00:06:31.180 { 00:06:31.180 "nbd_device": "/dev/nbd1", 00:06:31.180 "bdev_name": "Malloc1" 00:06:31.180 } 00:06:31.180 ]' 00:06:31.180 04:04:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:31.180 { 00:06:31.180 "nbd_device": "/dev/nbd0", 00:06:31.180 "bdev_name": "Malloc0" 00:06:31.180 }, 00:06:31.180 { 00:06:31.180 "nbd_device": "/dev/nbd1", 00:06:31.180 "bdev_name": "Malloc1" 00:06:31.180 } 00:06:31.180 ]' 00:06:31.180 04:04:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:06:31.180 04:04:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:31.181 /dev/nbd1' 00:06:31.181 04:04:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:31.181 /dev/nbd1' 00:06:31.181 04:04:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:31.181 04:04:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:31.181 04:04:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:31.181 04:04:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:31.181 04:04:30 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:31.181 04:04:30 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:31.181 04:04:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.181 04:04:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:31.181 04:04:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:31.181 04:04:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:31.181 04:04:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:31.181 04:04:30 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:31.181 256+0 records in 00:06:31.181 256+0 records out 00:06:31.181 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00594031 s, 177 MB/s 00:06:31.181 04:04:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:31.181 04:04:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:31.181 256+0 records in 00:06:31.181 256+0 records out 00:06:31.181 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0175337 s, 59.8 MB/s 00:06:31.181 04:04:30 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:31.181 04:04:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:31.181 256+0 records in 00:06:31.181 256+0 records out 00:06:31.181 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0250924 s, 41.8 MB/s 00:06:31.181 04:04:30 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:31.181 04:04:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.181 04:04:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:31.181 04:04:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:31.181 04:04:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:31.181 04:04:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:31.181 04:04:31 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:31.181 04:04:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:31.181 04:04:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:31.181 04:04:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:31.181 04:04:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:31.181 04:04:31 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:31.181 04:04:31 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:31.181 04:04:31 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.181 04:04:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.181 04:04:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:31.181 04:04:31 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:31.181 04:04:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:31.181 04:04:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:31.440 04:04:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:31.440 04:04:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:31.440 04:04:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:31.440 04:04:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:31.440 04:04:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:31.440 04:04:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:31.440 04:04:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:31.440 04:04:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:31.440 04:04:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:31.440 04:04:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:31.700 04:04:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:31.700 04:04:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:31.700 04:04:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:31.700 04:04:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:31.700 04:04:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:31.700 04:04:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:31.700 04:04:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:06:31.700 04:04:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:31.700 04:04:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:31.700 04:04:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.700 04:04:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:31.700 04:04:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:31.700 04:04:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:31.700 04:04:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:31.959 04:04:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:31.959 04:04:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:31.959 04:04:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:31.959 04:04:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:31.959 04:04:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:31.959 04:04:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:31.959 04:04:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:31.959 04:04:31 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:31.959 04:04:31 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:31.959 04:04:31 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:32.218 04:04:32 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:32.477 [2024-11-21 04:04:32.272702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:32.477 [2024-11-21 04:04:32.307333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.477 [2024-11-21 04:04:32.307344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:32.477 
[2024-11-21 04:04:32.383495] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:32.477 [2024-11-21 04:04:32.383577] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:35.765 spdk_app_start Round 1 00:06:35.765 04:04:35 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:35.765 04:04:35 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:35.765 04:04:35 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70252 /var/tmp/spdk-nbd.sock 00:06:35.765 04:04:35 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70252 ']' 00:06:35.765 04:04:35 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:35.765 04:04:35 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:35.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:35.765 04:04:35 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:35.765 04:04:35 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:35.765 04:04:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:35.765 04:04:35 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:35.765 04:04:35 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:35.765 04:04:35 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:35.765 Malloc0 00:06:35.765 04:04:35 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:35.765 Malloc1 00:06:35.765 04:04:35 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:35.765 04:04:35 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.765 04:04:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:35.765 04:04:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:35.765 04:04:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:35.765 04:04:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:35.765 04:04:35 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:35.765 04:04:35 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.765 04:04:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:35.765 04:04:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:35.765 04:04:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:35.765 04:04:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:35.765 04:04:35 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:35.765 04:04:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:35.765 04:04:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:35.766 04:04:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:36.024 /dev/nbd0 00:06:36.024 04:04:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:36.024 04:04:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:36.024 04:04:35 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:36.024 04:04:35 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:36.024 04:04:35 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:36.024 04:04:35 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:36.024 04:04:35 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:36.024 04:04:35 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:36.024 04:04:35 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:36.024 04:04:35 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:36.024 04:04:35 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:36.024 1+0 records in 00:06:36.024 1+0 records out 00:06:36.024 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000413979 s, 9.9 MB/s 00:06:36.024 04:04:35 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:36.024 04:04:35 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:36.024 04:04:35 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:36.024 04:04:35 
event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:36.024 04:04:35 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:36.024 04:04:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:36.024 04:04:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:36.024 04:04:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:36.282 /dev/nbd1 00:06:36.282 04:04:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:36.282 04:04:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:36.282 04:04:36 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:36.282 04:04:36 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:36.282 04:04:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:36.282 04:04:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:36.282 04:04:36 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:36.282 04:04:36 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:36.282 04:04:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:36.282 04:04:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:36.282 04:04:36 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:36.282 1+0 records in 00:06:36.282 1+0 records out 00:06:36.282 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000361581 s, 11.3 MB/s 00:06:36.282 04:04:36 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:36.282 04:04:36 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:36.282 04:04:36 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:36.282 04:04:36 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:36.282 04:04:36 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:36.282 04:04:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:36.282 04:04:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:36.282 04:04:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:36.282 04:04:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.282 04:04:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:36.539 04:04:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:36.539 { 00:06:36.539 "nbd_device": "/dev/nbd0", 00:06:36.539 "bdev_name": "Malloc0" 00:06:36.539 }, 00:06:36.539 { 00:06:36.539 "nbd_device": "/dev/nbd1", 00:06:36.539 "bdev_name": "Malloc1" 00:06:36.539 } 00:06:36.539 ]' 00:06:36.539 04:04:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:36.539 { 00:06:36.539 "nbd_device": "/dev/nbd0", 00:06:36.539 "bdev_name": "Malloc0" 00:06:36.539 }, 00:06:36.539 { 00:06:36.539 "nbd_device": "/dev/nbd1", 00:06:36.539 "bdev_name": "Malloc1" 00:06:36.539 } 00:06:36.539 ]' 00:06:36.539 04:04:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:36.540 04:04:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:36.540 /dev/nbd1' 00:06:36.540 04:04:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:36.540 /dev/nbd1' 00:06:36.540 04:04:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:36.540 04:04:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:36.540 04:04:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:36.540 
04:04:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:36.540 04:04:36 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:36.540 04:04:36 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:36.540 04:04:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.540 04:04:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:36.540 04:04:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:36.540 04:04:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:36.540 04:04:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:36.540 04:04:36 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:36.540 256+0 records in 00:06:36.540 256+0 records out 00:06:36.540 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00825731 s, 127 MB/s 00:06:36.540 04:04:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:36.540 04:04:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:36.540 256+0 records in 00:06:36.540 256+0 records out 00:06:36.540 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.021845 s, 48.0 MB/s 00:06:36.540 04:04:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:36.540 04:04:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:36.540 256+0 records in 00:06:36.540 256+0 records out 00:06:36.540 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0255977 s, 41.0 MB/s 00:06:36.540 04:04:36 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
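The `nbd_get_count` / `nbd_disks_name` dance traced above (nbd_common.sh@61-66) can be sketched as follows. This is a hedged reconstruction: the JSON literal stands in for the `rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks` output, and a plain `grep -o` replaces the `jq -r '.[] | .nbd_device'` step so the sketch has no jq dependency.

```shell
# Sketch of nbd_get_count: nbd_get_disks returns JSON, the device names
# are extracted, and grep -c counts how many /dev/nbd entries came back.
# Stand-in JSON below; the real script queries the RPC socket.
nbd_disks_json='[
  { "nbd_device": "/dev/nbd0", "bdev_name": "Malloc0" },
  { "nbd_device": "/dev/nbd1", "bdev_name": "Malloc1" }
]'

# The real helper uses: jq -r '.[] | .nbd_device'
nbd_disks_name=$(printf '%s\n' "$nbd_disks_json" | grep -o '/dev/nbd[0-9]*')

# grep -c exits non-zero when it counts 0 matches, which is why the trace
# shows a bare "true" after the empty-list case later in the log.
count=$(printf '%s\n' "$nbd_disks_name" | grep -c /dev/nbd) || true
echo "$count"
```

With two started disks this prints `2`, matching the `count=2` / `'[' 2 -ne 2 ']'` check in the trace.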
00:06:36.540 04:04:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.540 04:04:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:36.540 04:04:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:36.540 04:04:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:36.540 04:04:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:36.540 04:04:36 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:36.540 04:04:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:36.540 04:04:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:36.540 04:04:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:36.540 04:04:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:36.540 04:04:36 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:36.797 04:04:36 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:36.797 04:04:36 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.797 04:04:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.797 04:04:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:36.797 04:04:36 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:36.797 04:04:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:36.798 04:04:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:36.798 04:04:36 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:36.798 04:04:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:36.798 04:04:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:36.798 04:04:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:36.798 04:04:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:36.798 04:04:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:36.798 04:04:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:36.798 04:04:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:36.798 04:04:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:36.798 04:04:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:37.055 04:04:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:37.055 04:04:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:37.055 04:04:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:37.055 04:04:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:37.055 04:04:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:37.055 04:04:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:37.055 04:04:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:37.055 04:04:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:37.055 04:04:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:37.055 04:04:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.055 04:04:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:37.313 04:04:37 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:37.313 04:04:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:37.313 04:04:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:37.313 04:04:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:37.313 04:04:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:37.313 04:04:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:37.313 04:04:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:37.313 04:04:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:37.313 04:04:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:37.313 04:04:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:37.313 04:04:37 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:37.313 04:04:37 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:37.313 04:04:37 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:37.571 04:04:37 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:37.830 [2024-11-21 04:04:37.691360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:37.830 [2024-11-21 04:04:37.735483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.830 [2024-11-21 04:04:37.735508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:38.089 [2024-11-21 04:04:37.811871] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:38.089 [2024-11-21 04:04:37.811937] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:40.620 spdk_app_start Round 2 00:06:40.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
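The `waitfornbd` helper traced in every round above (autotest_common.sh@872-893) polls `/proc/partitions` for the device name and then, in the real helper, dd-reads a single 4 KiB block with `iflag=direct` to prove the device is readable. A minimal sketch, with the partitions file made a parameter so it runs without an actual NBD device:

```shell
# Sketch of waitfornbd: poll the partitions table up to 20 times for the
# device name; the real helper follows this with a direct-I/O dd read.
waitfornbd() {
    nbd_name=$1
    partitions=${2:-/proc/partitions}   # parameterized for this sketch only
    i=1
    while [ "$i" -le 20 ]; do
        if grep -q -w "$nbd_name" "$partitions"; then
            return 0                    # device registered with the kernel
        fi
        sleep 0.1
        i=$((i + 1))
    done
    return 1                            # device never appeared
}

# Demo against a stand-in partitions file
tmp=$(mktemp)
printf '  43        0      16384 nbd0\n' > "$tmp"
waitfornbd nbd0 "$tmp" && echo "nbd0 present"
rm -f "$tmp"
```

The `break` statements in the trace correspond to the successful `grep -q -w` here; the bounded retry loop is what keeps a missing device from hanging the test forever.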
00:06:40.620 04:04:40 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:40.620 04:04:40 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:40.620 04:04:40 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70252 /var/tmp/spdk-nbd.sock 00:06:40.620 04:04:40 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70252 ']' 00:06:40.620 04:04:40 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:40.620 04:04:40 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:40.620 04:04:40 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:40.620 04:04:40 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:40.620 04:04:40 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:40.879 04:04:40 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:40.879 04:04:40 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:40.879 04:04:40 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:41.138 Malloc0 00:06:41.138 04:04:40 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:41.138 Malloc1 00:06:41.138 04:04:41 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:41.138 04:04:41 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.138 04:04:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:41.138 04:04:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:41.138 04:04:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:41.138 04:04:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:41.138 04:04:41 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:41.138 04:04:41 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.138 04:04:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:41.398 04:04:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:41.398 04:04:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:41.398 04:04:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:41.398 04:04:41 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:41.398 04:04:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:41.398 04:04:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:41.398 04:04:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:41.398 /dev/nbd0 00:06:41.398 04:04:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:41.398 04:04:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:41.398 04:04:41 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:41.398 04:04:41 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:41.398 04:04:41 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:41.398 04:04:41 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:41.398 04:04:41 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:41.398 04:04:41 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:41.398 04:04:41 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 
00:06:41.398 04:04:41 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:41.398 04:04:41 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:41.398 1+0 records in 00:06:41.398 1+0 records out 00:06:41.398 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000176826 s, 23.2 MB/s 00:06:41.398 04:04:41 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:41.398 04:04:41 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:41.398 04:04:41 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:41.398 04:04:41 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:41.398 04:04:41 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:41.398 04:04:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:41.398 04:04:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:41.398 04:04:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:41.657 /dev/nbd1 00:06:41.657 04:04:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:41.657 04:04:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:41.657 04:04:41 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:41.657 04:04:41 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:41.657 04:04:41 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:41.657 04:04:41 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:41.657 04:04:41 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:41.657 04:04:41 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:06:41.657 04:04:41 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:41.657 04:04:41 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:41.657 04:04:41 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:41.657 1+0 records in 00:06:41.657 1+0 records out 00:06:41.657 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000358915 s, 11.4 MB/s 00:06:41.657 04:04:41 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:41.657 04:04:41 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:41.657 04:04:41 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:41.657 04:04:41 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:41.657 04:04:41 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:41.657 04:04:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:41.657 04:04:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:41.657 04:04:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:41.657 04:04:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.657 04:04:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:41.917 04:04:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:41.917 { 00:06:41.917 "nbd_device": "/dev/nbd0", 00:06:41.917 "bdev_name": "Malloc0" 00:06:41.917 }, 00:06:41.917 { 00:06:41.917 "nbd_device": "/dev/nbd1", 00:06:41.917 "bdev_name": "Malloc1" 00:06:41.917 } 00:06:41.917 ]' 00:06:41.917 04:04:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:41.917 { 
00:06:41.917 "nbd_device": "/dev/nbd0", 00:06:41.917 "bdev_name": "Malloc0" 00:06:41.917 }, 00:06:41.917 { 00:06:41.917 "nbd_device": "/dev/nbd1", 00:06:41.917 "bdev_name": "Malloc1" 00:06:41.917 } 00:06:41.917 ]' 00:06:41.917 04:04:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:41.917 04:04:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:41.917 /dev/nbd1' 00:06:41.917 04:04:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:41.917 04:04:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:41.917 /dev/nbd1' 00:06:41.917 04:04:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:41.917 04:04:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:41.917 04:04:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:41.917 04:04:41 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:41.917 04:04:41 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:41.917 04:04:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:41.917 04:04:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:41.917 04:04:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:41.917 04:04:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:41.917 04:04:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:41.917 04:04:41 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:41.917 256+0 records in 00:06:41.917 256+0 records out 00:06:41.917 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0123308 s, 85.0 MB/s 00:06:41.917 04:04:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:41.917 04:04:41 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:42.176 256+0 records in 00:06:42.176 256+0 records out 00:06:42.176 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0181052 s, 57.9 MB/s 00:06:42.176 04:04:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:42.176 04:04:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:42.176 256+0 records in 00:06:42.176 256+0 records out 00:06:42.176 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0261852 s, 40.0 MB/s 00:06:42.176 04:04:41 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:42.176 04:04:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:42.176 04:04:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:42.176 04:04:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:42.176 04:04:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:42.176 04:04:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:42.176 04:04:41 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:42.176 04:04:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:42.176 04:04:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:42.176 04:04:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:42.176 04:04:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:42.176 04:04:41 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
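The write/verify pair just traced (`nbd_dd_data_verify`, nbd_common.sh@70-85) can be sketched as below. This is an illustrative stand-in: plain temp files replace `/dev/nbd0` and `/dev/nbd1`, and `oflag=direct` is dropped, so the sketch runs without real NBD devices.

```shell
# Sketch of nbd_dd_data_verify: fill a temp file with 1 MiB of random
# data, dd it to every device in the list, then cmp each device back
# against the file.
tmp_file=$(mktemp)
dev0=$(mktemp)
dev1=$(mktemp)
nbd_list="$dev0 $dev1"

dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 2>/dev/null

for dev in $nbd_list; do                       # write phase
    dd if="$tmp_file" of="$dev" bs=4096 count=256 2>/dev/null
done

verified=0
for dev in $nbd_list; do                       # verify phase
    cmp -b -n 1M "$tmp_file" "$dev" && verified=$((verified + 1))
done
echo "verified $verified of 2 devices"
rm -f "$tmp_file" "$dev0" "$dev1"
```

As in the trace, a silent `cmp` means the data round-tripped intact, and the random file is removed once both devices verify.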
00:06:42.176 04:04:41 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:42.176 04:04:41 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:42.176 04:04:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:42.176 04:04:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:42.176 04:04:41 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:42.176 04:04:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:42.176 04:04:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:42.436 04:04:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:42.436 04:04:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:42.436 04:04:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:42.436 04:04:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:42.436 04:04:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:42.436 04:04:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:42.436 04:04:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:42.436 04:04:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:42.436 04:04:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:42.436 04:04:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:42.437 04:04:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:42.437 04:04:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:42.437 04:04:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:42.437 04:04:42 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:42.437 04:04:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:42.437 04:04:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:42.437 04:04:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:42.437 04:04:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:42.437 04:04:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:42.437 04:04:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:42.437 04:04:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:42.696 04:04:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:42.696 04:04:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:42.696 04:04:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:42.954 04:04:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:42.954 04:04:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:42.954 04:04:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:42.954 04:04:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:42.954 04:04:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:42.954 04:04:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:42.954 04:04:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:42.954 04:04:42 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:42.954 04:04:42 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:42.954 04:04:42 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:43.212 04:04:42 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:43.471 
[2024-11-21 04:04:43.205380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:43.471 [2024-11-21 04:04:43.253138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.471 [2024-11-21 04:04:43.253141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.471 [2024-11-21 04:04:43.331714] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:43.471 [2024-11-21 04:04:43.331795] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:46.002 04:04:45 event.app_repeat -- event/event.sh@38 -- # waitforlisten 70252 /var/tmp/spdk-nbd.sock 00:06:46.002 04:04:45 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70252 ']' 00:06:46.002 04:04:45 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:46.002 04:04:45 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:46.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:46.002 04:04:45 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:46.002 04:04:45 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:46.002 04:04:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:46.260 04:04:46 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:46.260 04:04:46 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:46.260 04:04:46 event.app_repeat -- event/event.sh@39 -- # killprocess 70252 00:06:46.260 04:04:46 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 70252 ']' 00:06:46.260 04:04:46 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 70252 00:06:46.260 04:04:46 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:46.260 04:04:46 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:46.260 04:04:46 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70252 00:06:46.260 04:04:46 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:46.260 04:04:46 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:46.260 killing process with pid 70252 00:06:46.260 04:04:46 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70252' 00:06:46.260 04:04:46 event.app_repeat -- common/autotest_common.sh@973 -- # kill 70252 00:06:46.260 04:04:46 event.app_repeat -- common/autotest_common.sh@978 -- # wait 70252 00:06:46.518 spdk_app_start is called in Round 0. 00:06:46.519 Shutdown signal received, stop current app iteration 00:06:46.519 Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 reinitialization... 00:06:46.519 spdk_app_start is called in Round 1. 00:06:46.519 Shutdown signal received, stop current app iteration 00:06:46.519 Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 reinitialization... 00:06:46.519 spdk_app_start is called in Round 2. 
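The `killprocess` steps traced here (autotest_common.sh@954-978) follow a deliberate order: confirm the pid is alive with `kill -0`, read its command name with `ps`, refuse to kill a `sudo` wrapper, then SIGTERM and reap it. A runnable sketch, with a background `sleep` standing in for the SPDK app (pid 70252 in the log):

```shell
# Sketch of killprocess: probe, sanity-check the process name, terminate.
killprocess() {
    pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 1            # process must exist
    process_name=$(ps --no-headers -o comm= "$pid")
    [ "$process_name" = sudo ] && return 1            # never kill a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null                           # reap; ignore SIGTERM status
    return 0
}

sleep 30 &            # stand-in for the app under test
killprocess $!
```

The `kill -0` probe is why the trace runs `kill -0 70252` before the real `kill`: it verifies the pid without sending a signal, so a stale pid fails fast instead of terminating an unrelated process.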
00:06:46.519 Shutdown signal received, stop current app iteration 00:06:46.519 Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 reinitialization... 00:06:46.519 spdk_app_start is called in Round 3. 00:06:46.519 Shutdown signal received, stop current app iteration 00:06:46.519 04:04:46 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:46.519 04:04:46 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:46.519 00:06:46.519 real 0m17.642s 00:06:46.519 user 0m38.447s 00:06:46.519 sys 0m2.994s 00:06:46.519 04:04:46 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.519 04:04:46 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:46.519 ************************************ 00:06:46.519 END TEST app_repeat 00:06:46.519 ************************************ 00:06:46.777 04:04:46 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:46.777 04:04:46 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:46.777 04:04:46 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:46.777 04:04:46 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.777 04:04:46 event -- common/autotest_common.sh@10 -- # set +x 00:06:46.777 ************************************ 00:06:46.777 START TEST cpu_locks 00:06:46.777 ************************************ 00:06:46.777 04:04:46 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:46.777 * Looking for test storage... 
00:06:46.777 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:06:46.777 04:04:46 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:06:46.777 04:04:46 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version
00:06:46.777 04:04:46 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:06:47.036 04:04:46 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:06:47.036 04:04:46 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:47.036 04:04:46 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:47.036 04:04:46 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:47.036 04:04:46 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-:
00:06:47.036 04:04:46 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1
00:06:47.036 04:04:46 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-:
00:06:47.036 04:04:46 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2
00:06:47.036 04:04:46 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<'
00:06:47.036 04:04:46 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2
00:06:47.036 04:04:46 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1
00:06:47.036 04:04:46 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:47.036 04:04:46 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in
00:06:47.036 04:04:46 event.cpu_locks -- scripts/common.sh@345 -- # : 1
00:06:47.036 04:04:46 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:47.036 04:04:46 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:47.036 04:04:46 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1
00:06:47.036 04:04:46 event.cpu_locks -- scripts/common.sh@353 -- # local d=1
00:06:47.036 04:04:46 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:47.036 04:04:46 event.cpu_locks -- scripts/common.sh@355 -- # echo 1
00:06:47.036 04:04:46 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1
00:06:47.036 04:04:46 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2
00:06:47.036 04:04:46 event.cpu_locks -- scripts/common.sh@353 -- # local d=2
00:06:47.036 04:04:46 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:47.036 04:04:46 event.cpu_locks -- scripts/common.sh@355 -- # echo 2
00:06:47.036 04:04:46 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2
00:06:47.036 04:04:46 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:47.036 04:04:46 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:47.036 04:04:46 event.cpu_locks -- scripts/common.sh@368 -- # return 0
00:06:47.036 04:04:46 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:47.036 04:04:46 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:06:47.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:47.036 --rc genhtml_branch_coverage=1
00:06:47.036 --rc genhtml_function_coverage=1
00:06:47.036 --rc genhtml_legend=1
00:06:47.036 --rc geninfo_all_blocks=1
00:06:47.036 --rc geninfo_unexecuted_blocks=1
00:06:47.036 
00:06:47.036 '
00:06:47.036 04:04:46 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:06:47.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:47.036 --rc genhtml_branch_coverage=1
00:06:47.036 --rc genhtml_function_coverage=1
00:06:47.036 --rc genhtml_legend=1
00:06:47.036 --rc geninfo_all_blocks=1
00:06:47.036 --rc geninfo_unexecuted_blocks=1
00:06:47.036 
00:06:47.036 '
00:06:47.036 04:04:46 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:06:47.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:47.036 --rc genhtml_branch_coverage=1
00:06:47.036 --rc genhtml_function_coverage=1
00:06:47.036 --rc genhtml_legend=1
00:06:47.036 --rc geninfo_all_blocks=1
00:06:47.036 --rc geninfo_unexecuted_blocks=1
00:06:47.036 
00:06:47.036 '
00:06:47.036 04:04:46 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:06:47.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:47.036 --rc genhtml_branch_coverage=1
00:06:47.036 --rc genhtml_function_coverage=1
00:06:47.036 --rc genhtml_legend=1
00:06:47.036 --rc geninfo_all_blocks=1
00:06:47.036 --rc geninfo_unexecuted_blocks=1
00:06:47.036 
00:06:47.036 '
00:06:47.036 04:04:46 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:06:47.036 04:04:46 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:06:47.036 04:04:46 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:06:47.036 04:04:46 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:06:47.036 04:04:46 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:47.036 04:04:46 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:47.036 04:04:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:47.036 ************************************
00:06:47.036 START TEST default_locks
00:06:47.036 ************************************
00:06:47.036 04:04:46 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks
00:06:47.036 04:04:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=70680
00:06:47.036 04:04:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 70680
00:06:47.036 04:04:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:06:47.036 04:04:46 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 70680 ']'
00:06:47.036 04:04:46 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:47.036 04:04:46 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:47.036 04:04:46 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:47.036 04:04:46 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:47.036 04:04:46 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:06:47.036 [2024-11-21 04:04:46.911657] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization...
00:06:47.036 [2024-11-21 04:04:46.911830] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70680 ]
00:06:47.295 [2024-11-21 04:04:47.068283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:47.295 [2024-11-21 04:04:47.115364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:47.862 04:04:47 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:47.862 04:04:47 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0
00:06:47.862 04:04:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 70680
00:06:47.862 04:04:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 70680
00:06:47.862 04:04:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:48.126 04:04:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 70680
00:06:48.126 04:04:48 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 70680 ']'
00:06:48.126 04:04:48 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 70680
00:06:48.126 04:04:48 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname
00:06:48.126 04:04:48 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:48.126 04:04:48 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70680
00:06:48.385 04:04:48 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:48.385 04:04:48 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
killing process with pid 70680
00:06:48.385 04:04:48 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70680'
00:06:48.385 04:04:48 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 70680
00:06:48.385 04:04:48 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 70680
00:06:48.962 04:04:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 70680
00:06:48.962 04:04:48 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0
00:06:48.962 04:04:48 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 70680
00:06:48.962 04:04:48 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:06:48.962 04:04:48 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:48.962 04:04:48 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:06:48.962 04:04:48 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:48.962 04:04:48 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 70680
00:06:48.962 04:04:48 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 70680 ']'
00:06:48.962 04:04:48 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:48.962 04:04:48 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:48.962 04:04:48 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:48.962 04:04:48 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:48.962 04:04:48 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:06:48.962 ERROR: process (pid: 70680) is no longer running
00:06:48.962 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (70680) - No such process
00:06:48.962 04:04:48 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:48.962 04:04:48 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1
00:06:48.962 04:04:48 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1
00:06:48.962 04:04:48 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:06:48.962 04:04:48 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:06:48.962 04:04:48 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:06:48.962 04:04:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:06:48.962 04:04:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:06:48.963 04:04:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:06:48.963 04:04:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:06:48.963 
00:06:48.963 real 0m1.937s
00:06:48.963 user 0m1.749s
00:06:48.963 sys 0m0.739s
00:06:48.963 04:04:48 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:48.963 04:04:48 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:06:48.963 ************************************
00:06:48.963 END TEST default_locks
00:06:48.963 ************************************
00:06:48.963 04:04:48 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:06:48.963 04:04:48 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:48.963 04:04:48 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:48.963 04:04:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:48.963 ************************************
00:06:48.963 START TEST default_locks_via_rpc
00:06:48.963 ************************************
00:06:48.963 04:04:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc
00:06:48.963 04:04:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=70733
00:06:48.963 04:04:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 70733
00:06:48.963 04:04:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:06:48.963 04:04:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 70733 ']'
00:06:48.963 04:04:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:48.963 04:04:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:48.963 04:04:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:48.963 04:04:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:48.963 04:04:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:48.963 [2024-11-21 04:04:48.910428] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization...
00:06:48.963 [2024-11-21 04:04:48.910562] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70733 ]
00:06:49.235 [2024-11-21 04:04:49.067532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:49.235 [2024-11-21 04:04:49.108159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:49.804 04:04:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:49.804 04:04:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:06:49.804 04:04:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:06:49.804 04:04:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:49.804 04:04:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:49.804 04:04:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:49.804 04:04:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:06:49.804 04:04:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:06:49.804 04:04:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:06:49.804 04:04:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:06:49.805 04:04:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:06:49.805 04:04:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:49.805 04:04:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:49.805 04:04:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:49.805 04:04:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 70733
00:06:49.805 04:04:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 70733
00:06:49.805 04:04:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:50.375 04:04:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 70733
00:06:50.375 04:04:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 70733 ']'
00:06:50.375 04:04:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 70733
00:06:50.375 04:04:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname
00:06:50.375 04:04:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:50.375 04:04:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70733
00:06:50.375 04:04:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:50.375 04:04:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
killing process with pid 70733
00:06:50.375 04:04:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70733'
00:06:50.375 04:04:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 70733
00:06:50.375 04:04:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 70733
00:06:50.942 
00:06:50.942 real 0m2.063s
00:06:50.942 user 0m1.927s
00:06:50.942 sys 0m0.754s
00:06:50.942 04:04:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:50.942 04:04:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:50.942 ************************************
00:06:50.942 END TEST default_locks_via_rpc
00:06:50.942 ************************************
00:06:51.202 04:04:50 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:06:51.202 04:04:50 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:51.202 04:04:50 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:51.202 04:04:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:51.202 ************************************
00:06:51.202 START TEST non_locking_app_on_locked_coremask
00:06:51.202 ************************************
00:06:51.202 04:04:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask
00:06:51.202 04:04:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=70785
00:06:51.202 04:04:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:06:51.202 04:04:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 70785 /var/tmp/spdk.sock
00:06:51.202 04:04:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 70785 ']'
00:06:51.202 04:04:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:51.202 04:04:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:51.202 04:04:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:51.202 04:04:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:51.202 04:04:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:51.460 [2024-11-21 04:04:51.049387] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization...
00:06:51.460 [2024-11-21 04:04:51.049540] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70785 ]
00:06:51.460 [2024-11-21 04:04:51.199482] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:51.460 [2024-11-21 04:04:51.245276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:52.029 04:04:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:52.029 04:04:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:52.029 04:04:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=70801
00:06:52.029 04:04:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:06:52.029 04:04:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 70801 /var/tmp/spdk2.sock
00:06:52.029 04:04:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 70801 ']'
00:06:52.029 04:04:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:52.029 04:04:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:52.029 04:04:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:52.029 04:04:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:52.029 04:04:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:52.288 [2024-11-21 04:04:51.954021] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization...
00:06:52.288 [2024-11-21 04:04:51.954149] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70801 ]
00:06:52.288 [2024-11-21 04:04:52.104754] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:52.288 [2024-11-21 04:04:52.104843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:52.288 [2024-11-21 04:04:52.200043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:53.228 04:04:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:53.228 04:04:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:53.228 04:04:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 70785
00:06:53.228 04:04:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 70785
00:06:53.228 04:04:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:53.487 04:04:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 70785
00:06:53.487 04:04:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 70785 ']'
00:06:53.487 04:04:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 70785
00:06:53.487 04:04:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:06:53.487 04:04:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:53.487 04:04:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70785
00:06:53.487 04:04:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:53.487 04:04:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
killing process with pid 70785
00:06:53.487 04:04:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70785'
00:06:53.487 04:04:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 70785
00:06:53.487 04:04:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 70785
00:06:54.867 04:04:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 70801
00:06:54.867 04:04:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 70801 ']'
00:06:54.867 04:04:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 70801
00:06:54.867 04:04:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:06:54.867 04:04:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:54.867 04:04:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70801
00:06:54.867 04:04:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:54.867 04:04:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
killing process with pid 70801
00:06:54.867 04:04:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70801'
00:06:54.867 04:04:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 70801
00:06:54.867 04:04:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 70801
00:06:55.439 
00:06:55.439 real 0m4.228s
00:06:55.439 user 0m4.124s
00:06:55.439 sys 0m1.298s
00:06:55.439 04:04:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:55.439 04:04:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:55.439 ************************************
00:06:55.439 END TEST non_locking_app_on_locked_coremask
00:06:55.439 ************************************
00:06:55.439 04:04:55 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:06:55.439 04:04:55 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:55.439 04:04:55 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:55.439 04:04:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:55.439 ************************************
00:06:55.439 START TEST locking_app_on_unlocked_coremask
00:06:55.439 ************************************
00:06:55.439 04:04:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask
00:06:55.439 04:04:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=70870
00:06:55.439 04:04:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:06:55.439 04:04:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 70870 /var/tmp/spdk.sock
00:06:55.439 04:04:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 70870 ']'
00:06:55.439 04:04:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:55.439 04:04:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:55.439 04:04:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:55.439 04:04:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:55.439 04:04:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:55.440 [2024-11-21 04:04:55.344689] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization...
00:06:55.440 [2024-11-21 04:04:55.344836] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70870 ]
00:06:55.711 [2024-11-21 04:04:55.478785] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:55.711 [2024-11-21 04:04:55.478860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:55.711 [2024-11-21 04:04:55.524518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:56.294 04:04:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:56.294 04:04:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:56.294 04:04:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=70886
00:06:56.294 04:04:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:06:56.294 04:04:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 70886 /var/tmp/spdk2.sock
00:06:56.294 04:04:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 70886 ']'
00:06:56.294 04:04:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:56.294 04:04:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:56.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:56.294 04:04:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:56.294 04:04:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:56.294 04:04:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:56.553 [2024-11-21 04:04:56.291944] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization...
00:06:56.553 [2024-11-21 04:04:56.292117] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70886 ]
00:06:56.553 [2024-11-21 04:04:56.444928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:56.813 [2024-11-21 04:04:56.539940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:57.383 04:04:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:57.383 04:04:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:57.383 04:04:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 70886
00:06:57.383 04:04:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 70886
00:06:57.383 04:04:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:57.952 04:04:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 70870
00:06:57.952 04:04:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 70870 ']'
00:06:57.952 04:04:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 70870
00:06:57.952 04:04:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:06:57.952 04:04:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:57.952 04:04:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70870
00:06:57.952 04:04:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:57.952 04:04:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
killing process with pid 70870
00:06:57.952 04:04:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70870'
00:06:57.952 04:04:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 70870
00:06:57.952 04:04:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 70870
00:06:59.333 04:04:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 70886
00:06:59.333 04:04:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 70886 ']'
00:06:59.333 04:04:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 70886
00:06:59.333 04:04:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:06:59.334 04:04:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:59.334 04:04:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70886
00:06:59.334 04:04:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:59.334 04:04:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
killing process with pid 70886
00:06:59.334 04:04:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70886'
00:06:59.334 04:04:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 70886
00:06:59.334 04:04:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 70886
00:06:59.903
00:06:59.903 real 0m4.429s
00:06:59.903 user 0m4.357s
00:06:59.903 sys 0m1.400s
00:06:59.903 04:04:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:59.903 04:04:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:59.903 ************************************
00:06:59.903 END TEST locking_app_on_unlocked_coremask
00:06:59.903 ************************************
00:06:59.903 04:04:59 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:06:59.903 04:04:59 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:59.903 04:04:59 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:59.903 04:04:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:59.903 ************************************
00:06:59.903 START TEST locking_app_on_locked_coremask
00:06:59.903 ************************************
00:06:59.903 04:04:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask
00:06:59.903 04:04:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=70963
00:06:59.903 04:04:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:06:59.903 04:04:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 70963 /var/tmp/spdk.sock
00:06:59.903 04:04:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 70963 ']'
00:06:59.903 04:04:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:59.903 04:04:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:59.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
04:04:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:59.903 04:04:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:59.903 04:04:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:59.903 [2024-11-21 04:04:59.836099] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization...
00:06:59.903 [2024-11-21 04:04:59.836288] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70963 ]
00:07:00.164 [2024-11-21 04:04:59.971100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:00.164 [2024-11-21 04:05:00.017287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:00.734 04:05:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:00.734 04:05:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:07:00.734 04:05:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:07:00.734 04:05:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=70973
00:07:00.734 04:05:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 70973 /var/tmp/spdk2.sock
00:07:00.734 04:05:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0
00:07:00.734 04:05:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 70973 /var/tmp/spdk2.sock
00:07:00.734 04:05:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:07:00.734 04:05:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:00.734 04:05:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:07:00.734 04:05:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:00.734 04:05:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 70973 /var/tmp/spdk2.sock
00:07:00.734 04:05:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 70973 ']'
00:07:00.734 04:05:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:07:00.734 04:05:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:00.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
04:05:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:07:00.734 04:05:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:00.734 04:05:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:00.994 [2024-11-21 04:05:00.753891] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization...
00:07:00.994 [2024-11-21 04:05:00.754044] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70973 ]
00:07:00.994 [2024-11-21 04:05:00.906633] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 70963 has claimed it.
00:07:00.994 [2024-11-21 04:05:00.906724] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:07:01.563 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (70973) - No such process
00:07:01.563 ERROR: process (pid: 70973) is no longer running
00:07:01.563 04:05:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:01.563 04:05:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1
00:07:01.563 04:05:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1
00:07:01.563 04:05:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:07:01.563 04:05:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:07:01.563 04:05:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:07:01.563 04:05:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 70963
00:07:01.563 04:05:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 70963
00:07:01.563 04:05:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:07:02.139 04:05:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 70963
00:07:02.139 04:05:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 70963 ']'
00:07:02.139 04:05:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 70963
00:07:02.139 04:05:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:07:02.139 04:05:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:02.139 04:05:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70963
00:07:02.139 04:05:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:02.139 04:05:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
killing process with pid 70963
00:07:02.139 04:05:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70963'
00:07:02.139 04:05:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 70963
00:07:02.139 04:05:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 70963
00:07:02.719
00:07:02.719 real 0m2.746s
00:07:02.719 user 0m2.821s
00:07:02.719 sys 0m0.876s
00:07:02.719 04:05:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:02.719 04:05:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:02.719 ************************************
00:07:02.719 END TEST locking_app_on_locked_coremask
00:07:02.719 ************************************
00:07:02.719 04:05:02 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:07:02.719 04:05:02 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:02.719 04:05:02 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:02.719 04:05:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:02.719 ************************************
00:07:02.719 START TEST locking_overlapped_coremask
00:07:02.719 ************************************
00:07:02.719 04:05:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask
00:07:02.719 04:05:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=71026
00:07:02.719 04:05:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7
00:07:02.719 04:05:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 71026 /var/tmp/spdk.sock
00:07:02.719 04:05:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 71026 ']'
00:07:02.719 04:05:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:02.719 04:05:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:02.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
04:05:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:02.719 04:05:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:02.719 04:05:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:02.719 [2024-11-21 04:05:02.652859] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization...
00:07:02.719 [2024-11-21 04:05:02.653032] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71026 ]
00:07:02.978 [2024-11-21 04:05:02.815197] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:07:02.978 [2024-11-21 04:05:02.859748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:02.978 [2024-11-21 04:05:02.859829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:02.978 [2024-11-21 04:05:02.859990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:07:03.547 04:05:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:03.547 04:05:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0
00:07:03.547 04:05:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:07:03.547 04:05:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=71044
00:07:03.547 04:05:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 71044 /var/tmp/spdk2.sock
00:07:03.547 04:05:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0
00:07:03.547 04:05:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 71044 /var/tmp/spdk2.sock
00:07:03.547 04:05:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:07:03.547 04:05:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:03.547 04:05:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:07:03.547 04:05:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:03.547 04:05:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 71044 /var/tmp/spdk2.sock
00:07:03.547 04:05:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 71044 ']'
00:07:03.547 04:05:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:07:03.547 04:05:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:03.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
04:05:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:07:03.547 04:05:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:03.547 04:05:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:03.806 [2024-11-21 04:05:03.538960] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization...
00:07:03.806 [2024-11-21 04:05:03.539140] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71044 ]
00:07:03.806 [2024-11-21 04:05:03.690787] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71026 has claimed it.
00:07:03.806 [2024-11-21 04:05:03.690874] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:07:04.374 ERROR: process (pid: 71044) is no longer running
00:07:04.374 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (71044) - No such process
00:07:04.374 04:05:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:04.374 04:05:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1
00:07:04.374 04:05:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1
00:07:04.374 04:05:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:07:04.374 04:05:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:07:04.374 04:05:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:07:04.374 04:05:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
00:07:04.374 04:05:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:07:04.374 04:05:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:07:04.374 04:05:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:07:04.374 04:05:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 71026
00:07:04.374 04:05:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 71026 ']'
00:07:04.374 04:05:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 71026
00:07:04.374 04:05:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname
00:07:04.374 04:05:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:04.374 04:05:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71026
00:07:04.374 04:05:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:04.374 04:05:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:04.374 04:05:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71026'
killing process with pid 71026
00:07:04.374 04:05:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 71026
00:07:04.374 04:05:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 71026
00:07:04.942
00:07:04.942 real 0m2.301s
00:07:04.942 user 0m5.971s
00:07:04.942 sys 0m0.671s
00:07:04.942 04:05:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:04.942 ************************************
00:07:04.942 END TEST locking_overlapped_coremask
00:07:04.942 ************************************
00:07:04.942 04:05:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:04.942 04:05:04 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:07:04.942 04:05:04 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:04.942 04:05:04 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:04.942 04:05:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:05.202 ************************************
00:07:05.202 START TEST locking_overlapped_coremask_via_rpc
00:07:05.202 ************************************
00:07:05.202 04:05:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc
00:07:05.202 04:05:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=71094
00:07:05.202 04:05:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
00:07:05.202 04:05:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 71094 /var/tmp/spdk.sock
00:07:05.202 04:05:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71094 ']'
00:07:05.202 04:05:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:05.202 04:05:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:05.202 04:05:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:05.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
04:05:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:05.202 04:05:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:05.202 [2024-11-21 04:05:05.021889] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization...
00:07:05.202 [2024-11-21 04:05:05.022057] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71094 ]
00:07:05.460 [2024-11-21 04:05:05.177632] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:07:05.460 [2024-11-21 04:05:05.177693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:07:05.460 [2024-11-21 04:05:05.223408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:05.460 [2024-11-21 04:05:05.223503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:05.460 [2024-11-21 04:05:05.223651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:07:06.027 04:05:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:06.027 04:05:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:07:06.027 04:05:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=71106
00:07:06.027 04:05:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks
00:07:06.027 04:05:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 71106 /var/tmp/spdk2.sock
00:07:06.027 04:05:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71106 ']'
00:07:06.027 04:05:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:07:06.027 04:05:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:06.027 04:05:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
04:05:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:06.027 04:05:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:06.287 [2024-11-21 04:05:05.950468] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization...
00:07:06.287 [2024-11-21 04:05:05.950724] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71106 ]
00:07:06.287 [2024-11-21 04:05:06.103976] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:07:06.287 [2024-11-21 04:05:06.104054] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:07:06.287 [2024-11-21 04:05:06.200883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:07:06.287 [2024-11-21 04:05:06.202280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:07:06.287 [2024-11-21 04:05:06.202343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:07:07.224 04:05:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:07.224 04:05:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:07:07.224 04:05:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks
00:07:07.224 04:05:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:07.224 04:05:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:07.224 04:05:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:07.224 04:05:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:07:07.224 04:05:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0
00:07:07.224 04:05:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:07:07.224 04:05:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:07:07.224 04:05:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:07.224 04:05:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:07:07.224 04:05:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:07.224 04:05:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:07:07.224 04:05:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:07.224 04:05:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:07.224 [2024-11-21 04:05:06.890401] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71094 has claimed it.
00:07:07.224 request:
00:07:07.224 {
00:07:07.224 "method": "framework_enable_cpumask_locks",
00:07:07.224 "req_id": 1
00:07:07.224 }
00:07:07.224 Got JSON-RPC error response
00:07:07.224 response:
00:07:07.224 {
00:07:07.224 "code": -32603,
00:07:07.224 "message": "Failed to claim CPU core: 2"
00:07:07.224 }
00:07:07.224 04:05:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:07:07.224 04:05:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1
00:07:07.224 04:05:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:07:07.224 04:05:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:07:07.224 04:05:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:07:07.224 04:05:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 71094 /var/tmp/spdk.sock
00:07:07.224 04:05:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71094 ']'
00:07:07.224 04:05:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:07.224 04:05:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:07.224 04:05:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:07.224 04:05:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:07.224 04:05:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:07.224 04:05:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:07.224 04:05:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:07:07.224 04:05:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 71106 /var/tmp/spdk2.sock
00:07:07.224 04:05:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71106 ']'
00:07:07.224 04:05:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:07:07.224 04:05:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:07.224 04:05:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:07:07.224 04:05:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:07.224 04:05:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.484 04:05:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:07.484 04:05:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:07.484 04:05:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:07.484 04:05:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:07.484 04:05:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:07.484 04:05:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:07.484 00:07:07.484 real 0m2.476s 00:07:07.484 user 0m1.151s 00:07:07.484 sys 0m0.162s 00:07:07.484 04:05:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:07.484 04:05:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.484 ************************************ 00:07:07.484 END TEST locking_overlapped_coremask_via_rpc 00:07:07.484 ************************************ 00:07:07.484 04:05:07 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:07.484 04:05:07 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71094 ]] 00:07:07.484 04:05:07 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 71094 00:07:07.484 04:05:07 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 71094 ']' 00:07:07.484 04:05:07 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 71094 00:07:07.484 04:05:07 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:07.743 04:05:07 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:07.743 04:05:07 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71094 00:07:07.743 killing process with pid 71094 00:07:07.743 04:05:07 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:07.743 04:05:07 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:07.743 04:05:07 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71094' 00:07:07.743 04:05:07 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 71094 00:07:07.743 04:05:07 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 71094 00:07:08.310 04:05:08 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71106 ]] 00:07:08.310 04:05:08 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71106 00:07:08.310 04:05:08 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 71106 ']' 00:07:08.310 04:05:08 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 71106 00:07:08.310 04:05:08 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:08.310 04:05:08 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:08.310 04:05:08 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71106 00:07:08.310 killing process with pid 71106 00:07:08.310 04:05:08 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:08.310 04:05:08 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:08.310 04:05:08 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 71106' 00:07:08.310 04:05:08 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 71106 00:07:08.310 04:05:08 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 71106 00:07:08.936 04:05:08 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:08.936 04:05:08 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:08.936 04:05:08 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71094 ]] 00:07:08.936 04:05:08 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71094 00:07:08.936 04:05:08 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 71094 ']' 00:07:08.936 Process with pid 71094 is not found 00:07:08.936 04:05:08 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 71094 00:07:08.936 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (71094) - No such process 00:07:08.936 04:05:08 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 71094 is not found' 00:07:08.936 Process with pid 71106 is not found 00:07:08.936 04:05:08 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71106 ]] 00:07:08.936 04:05:08 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71106 00:07:08.936 04:05:08 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 71106 ']' 00:07:08.936 04:05:08 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 71106 00:07:08.936 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (71106) - No such process 00:07:08.936 04:05:08 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 71106 is not found' 00:07:08.936 04:05:08 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:08.936 ************************************ 00:07:08.936 END TEST cpu_locks 00:07:08.936 ************************************ 00:07:08.936 00:07:08.936 real 0m22.237s 00:07:08.936 user 0m35.843s 00:07:08.936 sys 0m7.363s 00:07:08.936 04:05:08 event.cpu_locks -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:07:08.936 04:05:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:08.936 ************************************ 00:07:08.936 END TEST event 00:07:08.936 ************************************ 00:07:08.936 00:07:08.936 real 0m50.699s 00:07:08.936 user 1m34.163s 00:07:08.936 sys 0m11.658s 00:07:08.936 04:05:08 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:08.936 04:05:08 event -- common/autotest_common.sh@10 -- # set +x 00:07:09.195 04:05:08 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:09.195 04:05:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:09.195 04:05:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:09.195 04:05:08 -- common/autotest_common.sh@10 -- # set +x 00:07:09.195 ************************************ 00:07:09.195 START TEST thread 00:07:09.195 ************************************ 00:07:09.195 04:05:08 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:09.195 * Looking for test storage... 
00:07:09.195 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:09.195 04:05:09 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:09.195 04:05:09 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:07:09.195 04:05:09 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:09.195 04:05:09 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:09.195 04:05:09 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:09.195 04:05:09 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:09.195 04:05:09 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:09.195 04:05:09 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:09.195 04:05:09 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:09.195 04:05:09 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:09.195 04:05:09 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:09.195 04:05:09 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:09.195 04:05:09 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:09.195 04:05:09 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:09.195 04:05:09 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:09.195 04:05:09 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:09.195 04:05:09 thread -- scripts/common.sh@345 -- # : 1 00:07:09.195 04:05:09 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:09.195 04:05:09 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:09.195 04:05:09 thread -- scripts/common.sh@365 -- # decimal 1 00:07:09.195 04:05:09 thread -- scripts/common.sh@353 -- # local d=1 00:07:09.195 04:05:09 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:09.195 04:05:09 thread -- scripts/common.sh@355 -- # echo 1 00:07:09.195 04:05:09 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:09.195 04:05:09 thread -- scripts/common.sh@366 -- # decimal 2 00:07:09.195 04:05:09 thread -- scripts/common.sh@353 -- # local d=2 00:07:09.195 04:05:09 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:09.195 04:05:09 thread -- scripts/common.sh@355 -- # echo 2 00:07:09.195 04:05:09 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:09.195 04:05:09 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:09.195 04:05:09 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:09.195 04:05:09 thread -- scripts/common.sh@368 -- # return 0 00:07:09.195 04:05:09 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:09.195 04:05:09 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:09.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.195 --rc genhtml_branch_coverage=1 00:07:09.195 --rc genhtml_function_coverage=1 00:07:09.195 --rc genhtml_legend=1 00:07:09.195 --rc geninfo_all_blocks=1 00:07:09.195 --rc geninfo_unexecuted_blocks=1 00:07:09.195 00:07:09.195 ' 00:07:09.195 04:05:09 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:09.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.195 --rc genhtml_branch_coverage=1 00:07:09.195 --rc genhtml_function_coverage=1 00:07:09.195 --rc genhtml_legend=1 00:07:09.195 --rc geninfo_all_blocks=1 00:07:09.195 --rc geninfo_unexecuted_blocks=1 00:07:09.195 00:07:09.195 ' 00:07:09.195 04:05:09 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:09.195 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.195 --rc genhtml_branch_coverage=1 00:07:09.195 --rc genhtml_function_coverage=1 00:07:09.195 --rc genhtml_legend=1 00:07:09.195 --rc geninfo_all_blocks=1 00:07:09.195 --rc geninfo_unexecuted_blocks=1 00:07:09.195 00:07:09.195 ' 00:07:09.195 04:05:09 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:09.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.195 --rc genhtml_branch_coverage=1 00:07:09.195 --rc genhtml_function_coverage=1 00:07:09.195 --rc genhtml_legend=1 00:07:09.195 --rc geninfo_all_blocks=1 00:07:09.195 --rc geninfo_unexecuted_blocks=1 00:07:09.195 00:07:09.195 ' 00:07:09.195 04:05:09 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:09.195 04:05:09 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:09.195 04:05:09 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:09.195 04:05:09 thread -- common/autotest_common.sh@10 -- # set +x 00:07:09.195 ************************************ 00:07:09.195 START TEST thread_poller_perf 00:07:09.195 ************************************ 00:07:09.195 04:05:09 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:09.454 [2024-11-21 04:05:09.205079] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:07:09.454 [2024-11-21 04:05:09.205723] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71250 ] 00:07:09.454 [2024-11-21 04:05:09.364362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.454 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:09.454 [2024-11-21 04:05:09.402255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.834 [2024-11-21T04:05:10.807Z] ====================================== 00:07:10.834 [2024-11-21T04:05:10.807Z] busy:2302224478 (cyc) 00:07:10.834 [2024-11-21T04:05:10.807Z] total_run_count: 409000 00:07:10.834 [2024-11-21T04:05:10.807Z] tsc_hz: 2290000000 (cyc) 00:07:10.834 [2024-11-21T04:05:10.807Z] ====================================== 00:07:10.834 [2024-11-21T04:05:10.807Z] poller_cost: 5628 (cyc), 2457 (nsec) 00:07:10.834 00:07:10.834 real 0m1.329s 00:07:10.834 user 0m1.136s 00:07:10.834 sys 0m0.086s 00:07:10.834 04:05:10 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:10.834 04:05:10 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:10.834 ************************************ 00:07:10.834 END TEST thread_poller_perf 00:07:10.834 ************************************ 00:07:10.834 04:05:10 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:10.834 04:05:10 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:10.834 04:05:10 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:10.834 04:05:10 thread -- common/autotest_common.sh@10 -- # set +x 00:07:10.834 ************************************ 00:07:10.834 START TEST thread_poller_perf 00:07:10.834 
************************************ 00:07:10.834 04:05:10 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:10.834 [2024-11-21 04:05:10.602033] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:07:10.834 [2024-11-21 04:05:10.602254] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71286 ] 00:07:10.834 [2024-11-21 04:05:10.759139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.834 [2024-11-21 04:05:10.799010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.834 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:12.213 [2024-11-21T04:05:12.186Z] ====================================== 00:07:12.213 [2024-11-21T04:05:12.186Z] busy:2293664804 (cyc) 00:07:12.213 [2024-11-21T04:05:12.186Z] total_run_count: 5257000 00:07:12.213 [2024-11-21T04:05:12.186Z] tsc_hz: 2290000000 (cyc) 00:07:12.213 [2024-11-21T04:05:12.186Z] ====================================== 00:07:12.213 [2024-11-21T04:05:12.186Z] poller_cost: 436 (cyc), 190 (nsec) 00:07:12.213 00:07:12.213 real 0m1.326s 00:07:12.213 user 0m1.129s 00:07:12.213 sys 0m0.090s 00:07:12.213 04:05:11 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:12.213 04:05:11 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:12.213 ************************************ 00:07:12.213 END TEST thread_poller_perf 00:07:12.213 ************************************ 00:07:12.213 04:05:11 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:12.213 ************************************ 00:07:12.213 END TEST thread 00:07:12.213 ************************************ 00:07:12.213 
00:07:12.213 real 0m3.027s 00:07:12.213 user 0m2.420s 00:07:12.213 sys 0m0.404s 00:07:12.213 04:05:11 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:12.213 04:05:11 thread -- common/autotest_common.sh@10 -- # set +x 00:07:12.213 04:05:12 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:12.213 04:05:12 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:12.213 04:05:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:12.213 04:05:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:12.213 04:05:12 -- common/autotest_common.sh@10 -- # set +x 00:07:12.213 ************************************ 00:07:12.213 START TEST app_cmdline 00:07:12.213 ************************************ 00:07:12.213 04:05:12 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:12.213 * Looking for test storage... 00:07:12.213 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:12.213 04:05:12 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:12.213 04:05:12 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:07:12.213 04:05:12 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:12.473 04:05:12 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:12.473 04:05:12 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:12.473 04:05:12 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:12.473 04:05:12 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:12.473 04:05:12 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:12.473 04:05:12 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:12.473 04:05:12 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:12.473 04:05:12 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:12.473 04:05:12 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 
00:07:12.473 04:05:12 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:12.473 04:05:12 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:12.473 04:05:12 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:12.473 04:05:12 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:12.473 04:05:12 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:12.473 04:05:12 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:12.473 04:05:12 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:12.473 04:05:12 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:12.473 04:05:12 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:12.473 04:05:12 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:12.473 04:05:12 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:12.473 04:05:12 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:12.473 04:05:12 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:12.473 04:05:12 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:12.473 04:05:12 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:12.473 04:05:12 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:12.473 04:05:12 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:12.473 04:05:12 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:12.473 04:05:12 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:12.473 04:05:12 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:12.473 04:05:12 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:12.473 04:05:12 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:12.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.473 --rc genhtml_branch_coverage=1 00:07:12.473 --rc genhtml_function_coverage=1 00:07:12.473 --rc 
genhtml_legend=1 00:07:12.473 --rc geninfo_all_blocks=1 00:07:12.473 --rc geninfo_unexecuted_blocks=1 00:07:12.473 00:07:12.473 ' 00:07:12.473 04:05:12 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:12.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.473 --rc genhtml_branch_coverage=1 00:07:12.473 --rc genhtml_function_coverage=1 00:07:12.473 --rc genhtml_legend=1 00:07:12.473 --rc geninfo_all_blocks=1 00:07:12.473 --rc geninfo_unexecuted_blocks=1 00:07:12.473 00:07:12.473 ' 00:07:12.473 04:05:12 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:12.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.473 --rc genhtml_branch_coverage=1 00:07:12.473 --rc genhtml_function_coverage=1 00:07:12.473 --rc genhtml_legend=1 00:07:12.473 --rc geninfo_all_blocks=1 00:07:12.473 --rc geninfo_unexecuted_blocks=1 00:07:12.473 00:07:12.473 ' 00:07:12.473 04:05:12 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:12.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.473 --rc genhtml_branch_coverage=1 00:07:12.473 --rc genhtml_function_coverage=1 00:07:12.473 --rc genhtml_legend=1 00:07:12.473 --rc geninfo_all_blocks=1 00:07:12.473 --rc geninfo_unexecuted_blocks=1 00:07:12.473 00:07:12.473 ' 00:07:12.473 04:05:12 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:12.473 04:05:12 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=71370 00:07:12.473 04:05:12 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:12.473 04:05:12 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 71370 00:07:12.473 04:05:12 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 71370 ']' 00:07:12.473 04:05:12 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.473 04:05:12 app_cmdline -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:07:12.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:12.473 04:05:12 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:12.473 04:05:12 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:12.473 04:05:12 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:12.473 [2024-11-21 04:05:12.345205] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:07:12.473 [2024-11-21 04:05:12.345449] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71370 ] 00:07:12.734 [2024-11-21 04:05:12.501604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.734 [2024-11-21 04:05:12.540749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.304 04:05:13 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:13.304 04:05:13 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:07:13.304 04:05:13 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:13.563 { 00:07:13.563 "version": "SPDK v25.01-pre git sha1 557f022f6", 00:07:13.563 "fields": { 00:07:13.563 "major": 25, 00:07:13.563 "minor": 1, 00:07:13.563 "patch": 0, 00:07:13.563 "suffix": "-pre", 00:07:13.563 "commit": "557f022f6" 00:07:13.563 } 00:07:13.563 } 00:07:13.563 04:05:13 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:13.563 04:05:13 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:13.563 04:05:13 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:13.563 04:05:13 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:13.563 04:05:13 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:13.563 04:05:13 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:13.563 04:05:13 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:13.563 04:05:13 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.563 04:05:13 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:13.563 04:05:13 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.563 04:05:13 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:13.563 04:05:13 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:13.563 04:05:13 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:13.563 04:05:13 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:07:13.563 04:05:13 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:13.563 04:05:13 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:13.563 04:05:13 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:13.563 04:05:13 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:13.563 04:05:13 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:13.563 04:05:13 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:13.563 04:05:13 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:13.563 04:05:13 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:13.563 04:05:13 app_cmdline -- common/autotest_common.sh@646 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:13.563 04:05:13 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:13.823 request: 00:07:13.823 { 00:07:13.823 "method": "env_dpdk_get_mem_stats", 00:07:13.823 "req_id": 1 00:07:13.823 } 00:07:13.823 Got JSON-RPC error response 00:07:13.823 response: 00:07:13.823 { 00:07:13.823 "code": -32601, 00:07:13.823 "message": "Method not found" 00:07:13.823 } 00:07:13.823 04:05:13 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:07:13.823 04:05:13 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:13.823 04:05:13 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:13.823 04:05:13 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:13.823 04:05:13 app_cmdline -- app/cmdline.sh@1 -- # killprocess 71370 00:07:13.823 04:05:13 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 71370 ']' 00:07:13.823 04:05:13 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 71370 00:07:13.823 04:05:13 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:07:13.823 04:05:13 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:13.823 04:05:13 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71370 00:07:13.823 killing process with pid 71370 00:07:13.823 04:05:13 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:13.823 04:05:13 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:13.823 04:05:13 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71370' 00:07:13.823 04:05:13 app_cmdline -- common/autotest_common.sh@973 -- # kill 71370 00:07:13.823 04:05:13 app_cmdline -- common/autotest_common.sh@978 -- # wait 71370 00:07:14.391 00:07:14.391 real 0m2.251s 00:07:14.391 user 0m2.357s 00:07:14.391 sys 0m0.694s 00:07:14.391 04:05:14 app_cmdline -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:07:14.391 ************************************ 00:07:14.391 END TEST app_cmdline 00:07:14.392 ************************************ 00:07:14.392 04:05:14 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:14.392 04:05:14 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:14.392 04:05:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:14.392 04:05:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:14.392 04:05:14 -- common/autotest_common.sh@10 -- # set +x 00:07:14.392 ************************************ 00:07:14.392 START TEST version 00:07:14.392 ************************************ 00:07:14.392 04:05:14 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:14.652 * Looking for test storage... 00:07:14.652 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:14.652 04:05:14 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:14.652 04:05:14 version -- common/autotest_common.sh@1693 -- # lcov --version 00:07:14.652 04:05:14 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:14.652 04:05:14 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:14.652 04:05:14 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:14.652 04:05:14 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:14.652 04:05:14 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:14.652 04:05:14 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:14.652 04:05:14 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:14.652 04:05:14 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:14.652 04:05:14 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:14.652 04:05:14 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:14.652 04:05:14 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:14.652 04:05:14 version -- 
scripts/common.sh@341 -- # ver2_l=1 00:07:14.652 04:05:14 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:14.652 04:05:14 version -- scripts/common.sh@344 -- # case "$op" in 00:07:14.652 04:05:14 version -- scripts/common.sh@345 -- # : 1 00:07:14.652 04:05:14 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:14.652 04:05:14 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:14.652 04:05:14 version -- scripts/common.sh@365 -- # decimal 1 00:07:14.652 04:05:14 version -- scripts/common.sh@353 -- # local d=1 00:07:14.652 04:05:14 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:14.652 04:05:14 version -- scripts/common.sh@355 -- # echo 1 00:07:14.652 04:05:14 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:14.652 04:05:14 version -- scripts/common.sh@366 -- # decimal 2 00:07:14.652 04:05:14 version -- scripts/common.sh@353 -- # local d=2 00:07:14.652 04:05:14 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:14.652 04:05:14 version -- scripts/common.sh@355 -- # echo 2 00:07:14.652 04:05:14 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:14.652 04:05:14 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:14.652 04:05:14 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:14.652 04:05:14 version -- scripts/common.sh@368 -- # return 0 00:07:14.652 04:05:14 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:14.652 04:05:14 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:14.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.652 --rc genhtml_branch_coverage=1 00:07:14.652 --rc genhtml_function_coverage=1 00:07:14.652 --rc genhtml_legend=1 00:07:14.652 --rc geninfo_all_blocks=1 00:07:14.652 --rc geninfo_unexecuted_blocks=1 00:07:14.652 00:07:14.652 ' 00:07:14.652 04:05:14 version -- common/autotest_common.sh@1706 -- # 
LCOV_OPTS=' 00:07:14.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.652 --rc genhtml_branch_coverage=1 00:07:14.652 --rc genhtml_function_coverage=1 00:07:14.652 --rc genhtml_legend=1 00:07:14.652 --rc geninfo_all_blocks=1 00:07:14.652 --rc geninfo_unexecuted_blocks=1 00:07:14.652 00:07:14.652 ' 00:07:14.652 04:05:14 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:14.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.652 --rc genhtml_branch_coverage=1 00:07:14.652 --rc genhtml_function_coverage=1 00:07:14.652 --rc genhtml_legend=1 00:07:14.652 --rc geninfo_all_blocks=1 00:07:14.652 --rc geninfo_unexecuted_blocks=1 00:07:14.652 00:07:14.652 ' 00:07:14.652 04:05:14 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:14.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.652 --rc genhtml_branch_coverage=1 00:07:14.652 --rc genhtml_function_coverage=1 00:07:14.652 --rc genhtml_legend=1 00:07:14.652 --rc geninfo_all_blocks=1 00:07:14.652 --rc geninfo_unexecuted_blocks=1 00:07:14.652 00:07:14.652 ' 00:07:14.652 04:05:14 version -- app/version.sh@17 -- # get_header_version major 00:07:14.652 04:05:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:14.652 04:05:14 version -- app/version.sh@14 -- # cut -f2 00:07:14.652 04:05:14 version -- app/version.sh@14 -- # tr -d '"' 00:07:14.652 04:05:14 version -- app/version.sh@17 -- # major=25 00:07:14.652 04:05:14 version -- app/version.sh@18 -- # get_header_version minor 00:07:14.652 04:05:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:14.652 04:05:14 version -- app/version.sh@14 -- # cut -f2 00:07:14.652 04:05:14 version -- app/version.sh@14 -- # tr -d '"' 00:07:14.652 04:05:14 version -- app/version.sh@18 -- # minor=1 00:07:14.652 04:05:14 
version -- app/version.sh@19 -- # get_header_version patch 00:07:14.652 04:05:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:14.652 04:05:14 version -- app/version.sh@14 -- # cut -f2 00:07:14.652 04:05:14 version -- app/version.sh@14 -- # tr -d '"' 00:07:14.652 04:05:14 version -- app/version.sh@19 -- # patch=0 00:07:14.652 04:05:14 version -- app/version.sh@20 -- # get_header_version suffix 00:07:14.652 04:05:14 version -- app/version.sh@14 -- # cut -f2 00:07:14.652 04:05:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:14.652 04:05:14 version -- app/version.sh@14 -- # tr -d '"' 00:07:14.652 04:05:14 version -- app/version.sh@20 -- # suffix=-pre 00:07:14.652 04:05:14 version -- app/version.sh@22 -- # version=25.1 00:07:14.652 04:05:14 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:14.652 04:05:14 version -- app/version.sh@28 -- # version=25.1rc0 00:07:14.653 04:05:14 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:14.653 04:05:14 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:14.913 04:05:14 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:14.913 04:05:14 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:14.913 ************************************ 00:07:14.913 END TEST version 00:07:14.913 ************************************ 00:07:14.913 00:07:14.913 real 0m0.323s 00:07:14.913 user 0m0.187s 00:07:14.913 sys 0m0.187s 00:07:14.913 04:05:14 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:14.913 04:05:14 version -- common/autotest_common.sh@10 -- # set +x 00:07:14.913 
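The `version` test traced above follows one pattern repeatedly: `get_header_version` greps an `SPDK_VERSION_*` define out of `include/spdk/version.h`, cuts out the value field, and strips quotes, then assembles `major.minor[.patch]` plus the suffix. The following is a minimal standalone re-creation of that flow; the header contents are a stand-in written for this sketch (the real `version.h` layout may differ — the trace uses `cut -f2`, which assumes tab-separated fields, so this sketch adapts the field handling for its space-separated sample).

```shell
# Stand-in header for illustration only; not the real include/spdk/version.h.
header=$(mktemp)
cat > "$header" <<'EOF'
#define SPDK_VERSION_MAJOR 25
#define SPDK_VERSION_MINOR 1
#define SPDK_VERSION_PATCH 0
#define SPDK_VERSION_SUFFIX "-pre"
EOF

# Mirror of the grep | cut | tr pipeline from the trace, adapted to
# space-separated fields in the sample header above.
get_header_version() {
    grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" "$header" |
        cut -d' ' -f3 | tr -d '"'
}

major=$(get_header_version MAJOR)
minor=$(get_header_version MINOR)
patch=$(get_header_version PATCH)
suffix=$(get_header_version SUFFIX)

version="$major.$minor"
# patch is only appended when non-zero, matching `(( patch != 0 ))` above
if ((patch != 0)); then
    version="$version.$patch"
fi
# a -pre suffix corresponds to an rc0 Python version, hence 25.1 -> 25.1rc0
if [[ $suffix == -pre ]]; then
    version="${version}rc0"
fi
echo "$version"
```

In the trace, the value assembled this way (`25.1rc0`) is compared against `py_version` obtained from `python3 -c 'import spdk; print(spdk.__version__)'`, and the test passes only when the two agree.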
04:05:14 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:14.913 04:05:14 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:07:14.913 04:05:14 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:07:14.913 04:05:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:14.913 04:05:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:14.913 04:05:14 -- common/autotest_common.sh@10 -- # set +x 00:07:14.913 ************************************ 00:07:14.913 START TEST bdev_raid 00:07:14.913 ************************************ 00:07:14.913 04:05:14 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:07:14.913 * Looking for test storage... 00:07:14.913 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:14.913 04:05:14 bdev_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:14.913 04:05:14 bdev_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:07:14.913 04:05:14 bdev_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:15.174 04:05:14 bdev_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:15.174 04:05:14 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:15.174 04:05:14 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:15.174 04:05:14 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:15.174 04:05:14 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:07:15.174 04:05:14 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:07:15.174 04:05:14 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:07:15.174 04:05:14 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:07:15.174 04:05:14 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:07:15.174 04:05:14 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:07:15.174 04:05:14 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:07:15.174 04:05:14 bdev_raid -- scripts/common.sh@343 -- # local 
lt=0 gt=0 eq=0 v 00:07:15.174 04:05:14 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:07:15.174 04:05:14 bdev_raid -- scripts/common.sh@345 -- # : 1 00:07:15.174 04:05:14 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:15.174 04:05:14 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:15.174 04:05:14 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:07:15.174 04:05:14 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:07:15.174 04:05:14 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:15.174 04:05:14 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:07:15.174 04:05:14 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:07:15.174 04:05:14 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:07:15.174 04:05:14 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:07:15.174 04:05:14 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:15.174 04:05:14 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:07:15.174 04:05:14 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:07:15.174 04:05:14 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:15.174 04:05:14 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:15.174 04:05:14 bdev_raid -- scripts/common.sh@368 -- # return 0 00:07:15.174 04:05:14 bdev_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:15.174 04:05:14 bdev_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:15.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.174 --rc genhtml_branch_coverage=1 00:07:15.174 --rc genhtml_function_coverage=1 00:07:15.174 --rc genhtml_legend=1 00:07:15.174 --rc geninfo_all_blocks=1 00:07:15.174 --rc geninfo_unexecuted_blocks=1 00:07:15.174 00:07:15.174 ' 00:07:15.174 04:05:14 bdev_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:15.174 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:15.174 --rc genhtml_branch_coverage=1 00:07:15.174 --rc genhtml_function_coverage=1 00:07:15.174 --rc genhtml_legend=1 00:07:15.174 --rc geninfo_all_blocks=1 00:07:15.174 --rc geninfo_unexecuted_blocks=1 00:07:15.174 00:07:15.174 ' 00:07:15.174 04:05:14 bdev_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:15.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.174 --rc genhtml_branch_coverage=1 00:07:15.174 --rc genhtml_function_coverage=1 00:07:15.174 --rc genhtml_legend=1 00:07:15.174 --rc geninfo_all_blocks=1 00:07:15.174 --rc geninfo_unexecuted_blocks=1 00:07:15.174 00:07:15.174 ' 00:07:15.174 04:05:14 bdev_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:15.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.174 --rc genhtml_branch_coverage=1 00:07:15.174 --rc genhtml_function_coverage=1 00:07:15.174 --rc genhtml_legend=1 00:07:15.174 --rc geninfo_all_blocks=1 00:07:15.174 --rc geninfo_unexecuted_blocks=1 00:07:15.174 00:07:15.174 ' 00:07:15.174 04:05:14 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:15.174 04:05:14 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:07:15.174 04:05:14 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:07:15.174 04:05:14 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:07:15.174 04:05:14 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:07:15.174 04:05:14 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:07:15.174 04:05:14 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:07:15.174 04:05:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:15.174 04:05:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:15.174 04:05:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:15.174 ************************************ 
00:07:15.174 START TEST raid1_resize_data_offset_test 00:07:15.174 ************************************ 00:07:15.174 Process raid pid: 71535 00:07:15.174 04:05:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test 00:07:15.174 04:05:14 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=71535 00:07:15.175 04:05:14 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 71535' 00:07:15.175 04:05:14 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:15.175 04:05:14 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 71535 00:07:15.175 04:05:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 71535 ']' 00:07:15.175 04:05:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.175 04:05:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:15.175 04:05:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.175 04:05:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:15.175 04:05:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.175 [2024-11-21 04:05:15.034905] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:07:15.175 [2024-11-21 04:05:15.035113] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:15.435 [2024-11-21 04:05:15.169449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.435 [2024-11-21 04:05:15.211952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.435 [2024-11-21 04:05:15.288216] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:15.435 [2024-11-21 04:05:15.288338] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:16.005 04:05:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:16.005 04:05:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0 00:07:16.005 04:05:15 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:07:16.005 04:05:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.005 04:05:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.005 malloc0 00:07:16.005 04:05:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.005 04:05:15 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:07:16.005 04:05:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.005 04:05:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.005 malloc1 00:07:16.005 04:05:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.005 04:05:15 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:07:16.005 04:05:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.005 04:05:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.005 null0 00:07:16.005 04:05:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.005 04:05:15 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:07:16.005 04:05:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.005 04:05:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.005 [2024-11-21 04:05:15.956541] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:07:16.005 [2024-11-21 04:05:15.958683] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:16.005 [2024-11-21 04:05:15.958727] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:07:16.005 [2024-11-21 04:05:15.958878] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:07:16.005 [2024-11-21 04:05:15.958889] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:07:16.005 [2024-11-21 04:05:15.959161] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000021f0 00:07:16.005 [2024-11-21 04:05:15.959319] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:07:16.005 [2024-11-21 04:05:15.959333] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000001200 00:07:16.005 [2024-11-21 04:05:15.959494] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
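The check that follows in the trace dumps raid bdev state as JSON and extracts one field with `jq -r '.[].base_bdevs_list[2].data_offset'`, then asserts it arithmetically (`(( 2048 == 2048 ))`). Below is a self-contained sketch of that pattern; the canned JSON is a fabricated stand-in for real `bdev_raid_get_bdevs all` output, and the `grep`/`tail` fallback is only illustrative (the actual test uses `jq`).

```shell
# Canned sample standing in for `rpc_cmd bdev_raid_get_bdevs all` output.
json='[{"name": "Raid", "base_bdevs_list": [
  {"name": "malloc0", "data_offset": 2048},
  {"name": "malloc1", "data_offset": 2048},
  {"name": "null0", "data_offset": 2048}
]}]'

if command -v jq >/dev/null 2>&1; then
    # the extraction the trace itself performs
    data_offset=$(printf '%s' "$json" | jq -r '.[].base_bdevs_list[2].data_offset')
else
    # crude fallback: take the last data_offset value in the document
    data_offset=$(printf '%s' "$json" | grep -o '"data_offset": [0-9]*' |
        tail -n1 | grep -o '[0-9]*$')
fi

# mirror the test's arithmetic assertion on the extracted field
if (( data_offset == 2048 )); then
    echo "data_offset OK: $data_offset"
else
    echo "unexpected data_offset: $data_offset" >&2
    exit 1
fi
```

The same extraction is repeated later in the trace after `malloc2` (created with a different optimal-io boundary) is grown into the array, where the expected value becomes 2070 instead of 2048.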
00:07:16.005 04:05:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.005 04:05:15 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:16.005 04:05:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.005 04:05:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.005 04:05:15 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:07:16.005 04:05:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.266 04:05:16 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:07:16.266 04:05:16 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:07:16.266 04:05:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.266 04:05:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.266 [2024-11-21 04:05:16.020396] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:07:16.266 04:05:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.266 04:05:16 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:07:16.266 04:05:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.266 04:05:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.266 malloc2 00:07:16.266 04:05:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.266 04:05:16 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:07:16.266 04:05:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.266 04:05:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.266 [2024-11-21 04:05:16.233535] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:16.526 [2024-11-21 04:05:16.242572] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:16.526 04:05:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.526 [2024-11-21 04:05:16.244837] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:07:16.526 04:05:16 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:16.526 04:05:16 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:07:16.526 04:05:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.526 04:05:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.526 04:05:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.526 04:05:16 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:07:16.526 04:05:16 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 71535 00:07:16.526 04:05:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 71535 ']' 00:07:16.526 04:05:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 71535 00:07:16.526 04:05:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname 00:07:16.526 04:05:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:07:16.526 04:05:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71535 00:07:16.526 04:05:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:16.527 04:05:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:16.527 04:05:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71535' 00:07:16.527 killing process with pid 71535 00:07:16.527 04:05:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 71535 00:07:16.527 04:05:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 71535 00:07:16.527 [2024-11-21 04:05:16.328925] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:16.527 [2024-11-21 04:05:16.330255] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:07:16.527 [2024-11-21 04:05:16.330321] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:16.527 [2024-11-21 04:05:16.330344] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:07:16.527 [2024-11-21 04:05:16.340318] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:16.527 [2024-11-21 04:05:16.340656] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:16.527 [2024-11-21 04:05:16.340682] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Raid, state offline 00:07:16.787 [2024-11-21 04:05:16.738125] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:17.396 ************************************ 00:07:17.396 END TEST raid1_resize_data_offset_test 00:07:17.396 ************************************ 00:07:17.396 04:05:17 bdev_raid.raid1_resize_data_offset_test -- 
bdev/bdev_raid.sh@943 -- # return 0 00:07:17.396 00:07:17.396 real 0m2.107s 00:07:17.396 user 0m1.948s 00:07:17.396 sys 0m0.588s 00:07:17.396 04:05:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:17.396 04:05:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.396 04:05:17 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:07:17.396 04:05:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:17.396 04:05:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:17.396 04:05:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:17.396 ************************************ 00:07:17.396 START TEST raid0_resize_superblock_test 00:07:17.396 ************************************ 00:07:17.396 04:05:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0 00:07:17.396 04:05:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:07:17.396 04:05:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=71591 00:07:17.396 Process raid pid: 71591 00:07:17.397 04:05:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:17.397 04:05:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 71591' 00:07:17.397 04:05:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 71591 00:07:17.397 04:05:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 71591 ']' 00:07:17.397 04:05:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.397 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:07:17.397 04:05:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:17.397 04:05:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.397 04:05:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:17.397 04:05:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.397 [2024-11-21 04:05:17.212922] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:07:17.397 [2024-11-21 04:05:17.213119] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:17.659 [2024-11-21 04:05:17.370163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.659 [2024-11-21 04:05:17.408777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.660 [2024-11-21 04:05:17.485149] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:17.660 [2024-11-21 04:05:17.485315] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:18.231 04:05:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:18.231 04:05:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:18.231 04:05:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:07:18.231 04:05:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.231 04:05:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.491 
malloc0 00:07:18.491 04:05:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.491 04:05:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:18.491 04:05:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.491 04:05:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.491 [2024-11-21 04:05:18.253778] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:18.491 [2024-11-21 04:05:18.253907] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:18.491 [2024-11-21 04:05:18.253944] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:07:18.491 [2024-11-21 04:05:18.253959] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:18.491 [2024-11-21 04:05:18.256560] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:18.491 [2024-11-21 04:05:18.256605] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:18.491 pt0 00:07:18.491 04:05:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.491 04:05:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:07:18.492 04:05:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.492 04:05:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.492 fa10134d-9bec-4865-977b-1a3dc4171a54 00:07:18.492 04:05:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.492 04:05:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:07:18.492 04:05:18 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.492 04:05:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.492 0c9c9033-1887-4570-89f9-91b43e289807 00:07:18.492 04:05:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.492 04:05:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:07:18.492 04:05:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.492 04:05:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.492 3f9b0ee9-7838-4415-b571-72c39f7116ce 00:07:18.492 04:05:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.492 04:05:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:07:18.492 04:05:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:07:18.492 04:05:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.492 04:05:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.752 [2024-11-21 04:05:18.464505] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 0c9c9033-1887-4570-89f9-91b43e289807 is claimed 00:07:18.752 [2024-11-21 04:05:18.464601] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 3f9b0ee9-7838-4415-b571-72c39f7116ce is claimed 00:07:18.752 [2024-11-21 04:05:18.464711] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:07:18.752 [2024-11-21 04:05:18.464724] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:07:18.752 [2024-11-21 04:05:18.465038] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:18.752 [2024-11-21 04:05:18.465221] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:07:18.752 [2024-11-21 04:05:18.465248] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000001200 00:07:18.752 [2024-11-21 04:05:18.465381] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:18.752 04:05:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.752 04:05:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:18.752 04:05:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:07:18.752 04:05:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.752 04:05:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.752 04:05:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.752 04:05:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:07:18.752 04:05:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:07:18.752 04:05:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:18.752 04:05:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.752 04:05:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.752 04:05:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.752 04:05:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:07:18.752 04:05:18 bdev_raid.raid0_resize_superblock_test -- 
bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:18.752 04:05:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:07:18.752 04:05:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:18.752 04:05:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:18.752 04:05:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.752 04:05:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.752 [2024-11-21 04:05:18.564595] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:18.752 04:05:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.752 04:05:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:18.752 04:05:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:18.752 04:05:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:07:18.752 04:05:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:07:18.752 04:05:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.752 04:05:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.752 [2024-11-21 04:05:18.608500] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:18.752 [2024-11-21 04:05:18.608530] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '0c9c9033-1887-4570-89f9-91b43e289807' was resized: old size 131072, new size 204800 00:07:18.753 04:05:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.753 04:05:18 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:07:18.753 04:05:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.753 04:05:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.753 [2024-11-21 04:05:18.620343] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:18.753 [2024-11-21 04:05:18.620366] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '3f9b0ee9-7838-4415-b571-72c39f7116ce' was resized: old size 131072, new size 204800 00:07:18.753 [2024-11-21 04:05:18.620394] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:07:18.753 04:05:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.753 04:05:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:18.753 04:05:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.753 04:05:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.753 04:05:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:07:18.753 04:05:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.753 04:05:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:07:18.753 04:05:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:18.753 04:05:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.753 04:05:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.753 04:05:18 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:07:18.753 04:05:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.753 04:05:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:07:19.013 04:05:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:19.013 04:05:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:19.013 04:05:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:19.013 04:05:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:07:19.013 04:05:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.013 04:05:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.013 [2024-11-21 04:05:18.732263] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:19.013 04:05:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.013 04:05:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:19.013 04:05:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:19.013 04:05:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:07:19.013 04:05:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:07:19.013 04:05:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.013 04:05:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.013 [2024-11-21 04:05:18.775979] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 
being removed: closing lvstore lvs0 00:07:19.013 [2024-11-21 04:05:18.776105] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:07:19.013 [2024-11-21 04:05:18.776138] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:19.013 [2024-11-21 04:05:18.776178] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:07:19.013 [2024-11-21 04:05:18.776360] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:19.013 [2024-11-21 04:05:18.776431] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:19.013 [2024-11-21 04:05:18.776502] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Raid, state offline 00:07:19.013 04:05:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.013 04:05:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:19.013 04:05:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.013 04:05:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.013 [2024-11-21 04:05:18.787902] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:19.013 [2024-11-21 04:05:18.788017] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:19.013 [2024-11-21 04:05:18.788050] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:19.013 [2024-11-21 04:05:18.788063] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:19.013 [2024-11-21 04:05:18.790666] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:19.013 [2024-11-21 04:05:18.790704] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:07:19.013 [2024-11-21 04:05:18.792289] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 0c9c9033-1887-4570-89f9-91b43e289807 00:07:19.013 [2024-11-21 04:05:18.792358] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 0c9c9033-1887-4570-89f9-91b43e289807 is claimed 00:07:19.013 [2024-11-21 04:05:18.792446] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 3f9b0ee9-7838-4415-b571-72c39f7116ce 00:07:19.013 [2024-11-21 04:05:18.792472] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 3f9b0ee9-7838-4415-b571-72c39f7116ce is claimed 00:07:19.013 [2024-11-21 04:05:18.792596] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 3f9b0ee9-7838-4415-b571-72c39f7116ce (2) smaller than existing raid bdev Raid (3) 00:07:19.013 [2024-11-21 04:05:18.792630] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 0c9c9033-1887-4570-89f9-91b43e289807: File exists 00:07:19.013 [2024-11-21 04:05:18.792679] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:07:19.013 [2024-11-21 04:05:18.792690] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:07:19.013 [2024-11-21 04:05:18.792940] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:07:19.013 pt0 00:07:19.013 [2024-11-21 04:05:18.793128] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:07:19.013 [2024-11-21 04:05:18.793145] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000001580 00:07:19.013 [2024-11-21 04:05:18.793289] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:19.013 04:05:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.013 04:05:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd 
bdev_wait_for_examine 00:07:19.013 04:05:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.013 04:05:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.013 04:05:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.013 04:05:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:19.013 04:05:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:19.013 04:05:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:19.013 04:05:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:07:19.013 04:05:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.013 04:05:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.013 [2024-11-21 04:05:18.816545] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:19.013 04:05:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.013 04:05:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:19.013 04:05:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:19.014 04:05:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:07:19.014 04:05:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 71591 00:07:19.014 04:05:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 71591 ']' 00:07:19.014 04:05:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 71591 00:07:19.014 04:05:18 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:07:19.014 04:05:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:19.014 04:05:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71591 00:07:19.014 04:05:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:19.014 killing process with pid 71591 00:07:19.014 04:05:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:19.014 04:05:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71591' 00:07:19.014 04:05:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 71591 00:07:19.014 [2024-11-21 04:05:18.896973] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:19.014 [2024-11-21 04:05:18.897052] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:19.014 [2024-11-21 04:05:18.897097] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:19.014 [2024-11-21 04:05:18.897105] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Raid, state offline 00:07:19.014 04:05:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 71591 00:07:19.274 [2024-11-21 04:05:19.203863] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:19.843 ************************************ 00:07:19.843 END TEST raid0_resize_superblock_test 00:07:19.843 ************************************ 00:07:19.843 04:05:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:07:19.843 00:07:19.843 real 0m2.402s 00:07:19.843 user 0m2.551s 00:07:19.843 sys 0m0.621s 00:07:19.843 04:05:19 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:07:19.843 04:05:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.843 04:05:19 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:07:19.843 04:05:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:19.843 04:05:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:19.843 04:05:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:19.843 ************************************ 00:07:19.843 START TEST raid1_resize_superblock_test 00:07:19.843 ************************************ 00:07:19.843 04:05:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1 00:07:19.843 04:05:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:07:19.843 04:05:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=71668 00:07:19.843 Process raid pid: 71668 00:07:19.843 04:05:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:19.843 04:05:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 71668' 00:07:19.843 04:05:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 71668 00:07:19.843 04:05:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 71668 ']' 00:07:19.843 04:05:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.843 04:05:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:19.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:19.844 04:05:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:19.844 04:05:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:19.844 04:05:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.844 [2024-11-21 04:05:19.684426] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:07:19.844 [2024-11-21 04:05:19.684559] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:20.107 [2024-11-21 04:05:19.840783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.107 [2024-11-21 04:05:19.884794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.107 [2024-11-21 04:05:19.962394] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:20.107 [2024-11-21 04:05:19.962448] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:20.678 04:05:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:20.678 04:05:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:20.678 04:05:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:07:20.678 04:05:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.678 04:05:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.938 malloc0 00:07:20.938 04:05:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.938 04:05:20 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:20.938 04:05:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.938 04:05:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.938 [2024-11-21 04:05:20.711188] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:20.938 [2024-11-21 04:05:20.711265] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:20.938 [2024-11-21 04:05:20.711289] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:07:20.938 [2024-11-21 04:05:20.711300] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:20.938 [2024-11-21 04:05:20.713767] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:20.938 [2024-11-21 04:05:20.713881] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:20.938 pt0 00:07:20.938 04:05:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.938 04:05:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:07:20.938 04:05:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.938 04:05:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.938 f674dfd5-73a2-4905-9675-226fe77d5173 00:07:20.938 04:05:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.938 04:05:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:07:20.938 04:05:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.938 04:05:20 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.938 5106984a-7b28-4885-8c22-fb2840950fb6 00:07:20.938 04:05:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.938 04:05:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:07:20.938 04:05:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.938 04:05:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.198 e744eeab-0d5a-4f4f-ad71-2755d13ab36f 00:07:21.198 04:05:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.199 04:05:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:07:21.199 04:05:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:07:21.199 04:05:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.199 04:05:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.199 [2024-11-21 04:05:20.915851] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 5106984a-7b28-4885-8c22-fb2840950fb6 is claimed 00:07:21.199 [2024-11-21 04:05:20.916118] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev e744eeab-0d5a-4f4f-ad71-2755d13ab36f is claimed 00:07:21.199 [2024-11-21 04:05:20.916282] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:07:21.199 [2024-11-21 04:05:20.916300] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:07:21.199 [2024-11-21 04:05:20.916647] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:21.199 [2024-11-21 04:05:20.916818] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:07:21.199 [2024-11-21 04:05:20.916829] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000001200 00:07:21.199 [2024-11-21 04:05:20.917005] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:21.199 04:05:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.199 04:05:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:21.199 04:05:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:07:21.199 04:05:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.199 04:05:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.199 04:05:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.199 04:05:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:07:21.199 04:05:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:21.199 04:05:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:07:21.199 04:05:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.199 04:05:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.199 04:05:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.199 04:05:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:07:21.199 04:05:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:21.199 04:05:21 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:21.199 04:05:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.199 04:05:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.199 04:05:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:21.199 04:05:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:07:21.199 [2024-11-21 04:05:21.027922] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:21.199 04:05:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.199 04:05:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:21.199 04:05:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:21.199 04:05:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:07:21.199 04:05:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:07:21.199 04:05:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.199 04:05:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.199 [2024-11-21 04:05:21.075761] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:21.199 [2024-11-21 04:05:21.075797] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '5106984a-7b28-4885-8c22-fb2840950fb6' was resized: old size 131072, new size 204800 00:07:21.199 04:05:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.199 04:05:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:07:21.199 04:05:21 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.199 04:05:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.199 [2024-11-21 04:05:21.087625] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:21.199 [2024-11-21 04:05:21.087647] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'e744eeab-0d5a-4f4f-ad71-2755d13ab36f' was resized: old size 131072, new size 204800 00:07:21.199 [2024-11-21 04:05:21.087676] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:07:21.199 04:05:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.199 04:05:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:07:21.199 04:05:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:21.199 04:05:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.199 04:05:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.199 04:05:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.199 04:05:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:07:21.199 04:05:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:21.199 04:05:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:07:21.199 04:05:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.199 04:05:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.199 04:05:21 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.460 04:05:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:07:21.460 04:05:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:21.460 04:05:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:07:21.460 04:05:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:21.460 04:05:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:21.460 04:05:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.460 04:05:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.460 [2024-11-21 04:05:21.203536] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:21.460 04:05:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.460 04:05:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:21.460 04:05:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:21.460 04:05:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:07:21.460 04:05:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:07:21.460 04:05:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.460 04:05:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.460 [2024-11-21 04:05:21.231280] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:07:21.460 [2024-11-21 04:05:21.231355] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 
00:07:21.460 [2024-11-21 04:05:21.231391] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:07:21.460 [2024-11-21 04:05:21.231560] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:21.460 [2024-11-21 04:05:21.231716] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:21.460 [2024-11-21 04:05:21.231781] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:21.460 [2024-11-21 04:05:21.231796] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Raid, state offline 00:07:21.460 04:05:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.460 04:05:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:21.460 04:05:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.460 04:05:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.460 [2024-11-21 04:05:21.243230] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:21.460 [2024-11-21 04:05:21.243291] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:21.460 [2024-11-21 04:05:21.243312] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:21.460 [2024-11-21 04:05:21.243323] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:21.460 [2024-11-21 04:05:21.245807] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:21.460 [2024-11-21 04:05:21.245845] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:21.460 [2024-11-21 04:05:21.247371] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 
5106984a-7b28-4885-8c22-fb2840950fb6 00:07:21.460 [2024-11-21 04:05:21.247433] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 5106984a-7b28-4885-8c22-fb2840950fb6 is claimed 00:07:21.460 [2024-11-21 04:05:21.247515] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev e744eeab-0d5a-4f4f-ad71-2755d13ab36f 00:07:21.460 [2024-11-21 04:05:21.247535] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev e744eeab-0d5a-4f4f-ad71-2755d13ab36f is claimed 00:07:21.460 [2024-11-21 04:05:21.247665] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev e744eeab-0d5a-4f4f-ad71-2755d13ab36f (2) smaller than existing raid bdev Raid (3) 00:07:21.460 [2024-11-21 04:05:21.247688] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 5106984a-7b28-4885-8c22-fb2840950fb6: File exists 00:07:21.460 [2024-11-21 04:05:21.247726] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:07:21.460 [2024-11-21 04:05:21.247736] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:07:21.460 [2024-11-21 04:05:21.247984] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:07:21.460 pt0 00:07:21.460 [2024-11-21 04:05:21.248179] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:07:21.460 [2024-11-21 04:05:21.248196] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000001580 00:07:21.460 [2024-11-21 04:05:21.248349] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:21.460 04:05:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.460 04:05:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:07:21.460 04:05:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:07:21.460 04:05:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.460 04:05:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.460 04:05:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:21.460 04:05:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:07:21.460 04:05:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:21.460 04:05:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:21.460 04:05:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.460 04:05:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.460 [2024-11-21 04:05:21.271602] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:21.460 04:05:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.460 04:05:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:21.460 04:05:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:21.460 04:05:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:07:21.460 04:05:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 71668 00:07:21.460 04:05:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 71668 ']' 00:07:21.461 04:05:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 71668 00:07:21.461 04:05:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:21.461 04:05:21 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:21.461 04:05:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71668 00:07:21.461 killing process with pid 71668 00:07:21.461 04:05:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:21.461 04:05:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:21.461 04:05:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71668' 00:07:21.461 04:05:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 71668 00:07:21.461 [2024-11-21 04:05:21.353365] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:21.461 [2024-11-21 04:05:21.353468] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:21.461 04:05:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 71668 00:07:21.461 [2024-11-21 04:05:21.353527] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:21.461 [2024-11-21 04:05:21.353537] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Raid, state offline 00:07:21.720 [2024-11-21 04:05:21.661123] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:22.290 04:05:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:07:22.290 00:07:22.290 real 0m2.389s 00:07:22.290 user 0m2.487s 00:07:22.290 sys 0m0.674s 00:07:22.290 ************************************ 00:07:22.290 END TEST raid1_resize_superblock_test 00:07:22.290 ************************************ 00:07:22.291 04:05:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:22.291 04:05:21 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:22.291 04:05:22 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:07:22.291 04:05:22 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:07:22.291 04:05:22 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:07:22.291 04:05:22 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:07:22.291 04:05:22 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:07:22.291 04:05:22 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:07:22.291 04:05:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:22.291 04:05:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:22.291 04:05:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:22.291 ************************************ 00:07:22.291 START TEST raid_function_test_raid0 00:07:22.291 ************************************ 00:07:22.291 04:05:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0 00:07:22.291 04:05:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:07:22.291 04:05:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:07:22.291 04:05:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:07:22.291 04:05:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=71748 00:07:22.291 04:05:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:22.291 Process raid pid: 71748 00:07:22.291 04:05:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 71748' 00:07:22.291 04:05:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 71748 00:07:22.291 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:07:22.291 04:05:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 71748 ']' 00:07:22.291 04:05:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.291 04:05:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:22.291 04:05:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:22.291 04:05:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:22.291 04:05:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:22.291 [2024-11-21 04:05:22.162439] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:07:22.291 [2024-11-21 04:05:22.162581] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:22.550 [2024-11-21 04:05:22.319513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.550 [2024-11-21 04:05:22.360314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.550 [2024-11-21 04:05:22.437516] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:22.550 [2024-11-21 04:05:22.437663] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:23.121 04:05:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:23.121 04:05:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0 00:07:23.121 04:05:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:23.121 04:05:22 
bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.121 04:05:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:23.121 Base_1 00:07:23.121 04:05:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.121 04:05:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:23.121 04:05:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.121 04:05:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:23.121 Base_2 00:07:23.121 04:05:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.121 04:05:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:07:23.121 04:05:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.121 04:05:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:23.121 [2024-11-21 04:05:23.035400] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:23.121 [2024-11-21 04:05:23.037768] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:23.121 [2024-11-21 04:05:23.037853] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:07:23.121 [2024-11-21 04:05:23.037865] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:23.121 [2024-11-21 04:05:23.038266] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:23.121 [2024-11-21 04:05:23.038547] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:07:23.121 [2024-11-21 04:05:23.038567] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: 
raid bdev is created with name raid, raid_bdev 0x617000001200 00:07:23.121 [2024-11-21 04:05:23.038728] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:23.121 04:05:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.121 04:05:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:23.121 04:05:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:23.121 04:05:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.121 04:05:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:23.121 04:05:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.121 04:05:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:23.121 04:05:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:23.121 04:05:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:07:23.121 04:05:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:23.121 04:05:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:23.121 04:05:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:23.121 04:05:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:23.121 04:05:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:23.121 04:05:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:07:23.121 04:05:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:23.121 04:05:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 
-- # (( i < 1 )) 00:07:23.121 04:05:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:07:23.381 [2024-11-21 04:05:23.275087] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:07:23.381 /dev/nbd0 00:07:23.381 04:05:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:23.381 04:05:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:23.381 04:05:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:23.381 04:05:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i 00:07:23.381 04:05:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:23.381 04:05:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:23.381 04:05:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:23.381 04:05:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break 00:07:23.381 04:05:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:23.381 04:05:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:23.381 04:05:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:23.381 1+0 records in 00:07:23.381 1+0 records out 00:07:23.381 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000396351 s, 10.3 MB/s 00:07:23.381 04:05:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:23.381 04:05:23 bdev_raid.raid_function_test_raid0 -- 
common/autotest_common.sh@890 -- # size=4096 00:07:23.381 04:05:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:23.381 04:05:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:23.381 04:05:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 -- # return 0 00:07:23.381 04:05:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:23.381 04:05:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:23.381 04:05:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:23.642 04:05:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:23.642 04:05:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:23.642 04:05:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:23.642 { 00:07:23.642 "nbd_device": "/dev/nbd0", 00:07:23.642 "bdev_name": "raid" 00:07:23.642 } 00:07:23.642 ]' 00:07:23.642 04:05:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:23.642 04:05:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:23.642 { 00:07:23.642 "nbd_device": "/dev/nbd0", 00:07:23.642 "bdev_name": "raid" 00:07:23.642 } 00:07:23.642 ]' 00:07:23.642 04:05:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:23.642 04:05:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:07:23.642 04:05:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:23.903 04:05:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:07:23.903 
04:05:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:07:23.903 04:05:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:07:23.903 04:05:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:07:23.903 04:05:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:07:23.903 04:05:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:07:23.903 04:05:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:07:23.903 04:05:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:07:23.903 04:05:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:07:23.903 04:05:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:07:23.903 04:05:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:07:23.903 04:05:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:07:23.903 04:05:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:07:23.903 04:05:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:07:23.903 04:05:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:07:23.903 04:05:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:07:23.903 04:05:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:07:23.903 04:05:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:07:23.903 04:05:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:07:23.903 04:05:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 
00:07:23.903 04:05:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:07:23.903 4096+0 records in 00:07:23.903 4096+0 records out 00:07:23.903 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0325271 s, 64.5 MB/s 00:07:23.903 04:05:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:07:24.163 4096+0 records in 00:07:24.163 4096+0 records out 00:07:24.163 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.221323 s, 9.5 MB/s 00:07:24.163 04:05:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:07:24.163 04:05:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:24.163 04:05:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:07:24.163 04:05:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:24.163 04:05:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:07:24.163 04:05:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:07:24.163 04:05:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:07:24.163 128+0 records in 00:07:24.163 128+0 records out 00:07:24.163 65536 bytes (66 kB, 64 KiB) copied, 0.000532791 s, 123 MB/s 00:07:24.163 04:05:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:07:24.163 04:05:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:24.163 04:05:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:24.163 04:05:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( 
i++ )) 00:07:24.163 04:05:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:24.163 04:05:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:07:24.163 04:05:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:07:24.163 04:05:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:07:24.163 2035+0 records in 00:07:24.163 2035+0 records out 00:07:24.163 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0137281 s, 75.9 MB/s 00:07:24.163 04:05:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:07:24.163 04:05:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:24.163 04:05:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:24.163 04:05:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:24.163 04:05:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:24.163 04:05:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:07:24.163 04:05:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:07:24.163 04:05:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:07:24.163 456+0 records in 00:07:24.163 456+0 records out 00:07:24.163 233472 bytes (233 kB, 228 KiB) copied, 0.00194549 s, 120 MB/s 00:07:24.163 04:05:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:07:24.163 04:05:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:24.163 04:05:23 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:24.163 04:05:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:24.163 04:05:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:24.163 04:05:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:07:24.163 04:05:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:07:24.163 04:05:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:07:24.163 04:05:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:24.163 04:05:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:24.163 04:05:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:07:24.163 04:05:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:24.163 04:05:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:07:24.424 04:05:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:24.424 [2024-11-21 04:05:24.193325] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:24.424 04:05:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:24.424 04:05:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:24.424 04:05:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:24.424 04:05:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:24.424 04:05:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:07:24.424 04:05:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:07:24.424 04:05:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:07:24.424 04:05:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:07:24.424 04:05:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:24.424 04:05:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:24.684 04:05:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:24.684 04:05:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:24.684 04:05:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:24.684 04:05:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:24.684 04:05:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:24.684 04:05:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:24.684 04:05:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:07:24.684 04:05:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:07:24.684 04:05:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:24.684 04:05:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:07:24.684 04:05:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:07:24.684 04:05:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 71748 00:07:24.684 04:05:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # '[' -z 71748 ']' 00:07:24.684 04:05:24 bdev_raid.raid_function_test_raid0 -- 
common/autotest_common.sh@958 -- # kill -0 71748 00:07:24.684 04:05:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname 00:07:24.684 04:05:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:24.684 04:05:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71748 00:07:24.684 killing process with pid 71748 00:07:24.684 04:05:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:24.684 04:05:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:24.684 04:05:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71748' 00:07:24.684 04:05:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 71748 00:07:24.684 [2024-11-21 04:05:24.514495] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:24.684 04:05:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 71748 00:07:24.684 [2024-11-21 04:05:24.514651] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:24.684 [2024-11-21 04:05:24.514722] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:24.684 [2024-11-21 04:05:24.514737] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid, state offline 00:07:24.684 [2024-11-21 04:05:24.557185] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:24.945 04:05:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:07:24.945 00:07:24.945 real 0m2.804s 00:07:24.945 user 0m3.346s 00:07:24.945 sys 0m0.988s 00:07:24.945 04:05:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:24.945 04:05:24 bdev_raid.raid_function_test_raid0 
-- common/autotest_common.sh@10 -- # set +x 00:07:24.945 ************************************ 00:07:24.945 END TEST raid_function_test_raid0 00:07:24.945 ************************************ 00:07:25.205 04:05:24 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:07:25.205 04:05:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:25.205 04:05:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:25.205 04:05:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:25.205 ************************************ 00:07:25.205 START TEST raid_function_test_concat 00:07:25.205 ************************************ 00:07:25.205 04:05:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat 00:07:25.205 04:05:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:07:25.205 04:05:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:07:25.205 04:05:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:07:25.205 04:05:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=71861 00:07:25.205 04:05:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:25.205 Process raid pid: 71861 00:07:25.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:25.205 04:05:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 71861' 00:07:25.205 04:05:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 71861 00:07:25.205 04:05:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 71861 ']' 00:07:25.205 04:05:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.205 04:05:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:25.205 04:05:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.205 04:05:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:25.205 04:05:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:25.205 [2024-11-21 04:05:25.036721] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:07:25.205 [2024-11-21 04:05:25.036993] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:25.465 [2024-11-21 04:05:25.194186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.465 [2024-11-21 04:05:25.233511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.465 [2024-11-21 04:05:25.309611] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:25.465 [2024-11-21 04:05:25.309757] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:26.039 04:05:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:26.039 04:05:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0 00:07:26.039 04:05:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:26.039 04:05:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.039 04:05:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:26.039 Base_1 00:07:26.039 04:05:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.039 04:05:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:26.039 04:05:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.039 04:05:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:26.039 Base_2 00:07:26.039 04:05:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.039 04:05:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:07:26.039 04:05:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.039 04:05:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:26.039 [2024-11-21 04:05:25.910863] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:26.039 [2024-11-21 04:05:25.913068] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:26.039 [2024-11-21 04:05:25.913184] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:07:26.039 [2024-11-21 04:05:25.913250] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:26.039 [2024-11-21 04:05:25.913601] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:26.039 [2024-11-21 04:05:25.913787] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:07:26.039 [2024-11-21 04:05:25.913831] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000001200 00:07:26.039 [2024-11-21 04:05:25.914031] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:26.039 04:05:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.039 04:05:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:26.039 04:05:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.039 04:05:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:26.039 04:05:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:26.039 04:05:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.039 04:05:25 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:26.039 04:05:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:26.039 04:05:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:07:26.039 04:05:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:26.039 04:05:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:26.039 04:05:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:26.039 04:05:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:26.039 04:05:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:26.039 04:05:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:07:26.039 04:05:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:26.039 04:05:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:26.039 04:05:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:07:26.314 [2024-11-21 04:05:26.162471] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:07:26.314 /dev/nbd0 00:07:26.314 04:05:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:26.314 04:05:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:26.314 04:05:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:26.314 04:05:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i 00:07:26.314 04:05:26 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:26.314 04:05:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:26.314 04:05:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:26.314 04:05:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break 00:07:26.314 04:05:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:26.314 04:05:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:26.314 04:05:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:26.314 1+0 records in 00:07:26.314 1+0 records out 00:07:26.314 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000612299 s, 6.7 MB/s 00:07:26.314 04:05:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:26.314 04:05:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096 00:07:26.314 04:05:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:26.314 04:05:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:26.314 04:05:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0 00:07:26.314 04:05:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:26.314 04:05:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:26.314 04:05:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:26.314 04:05:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 
00:07:26.314 04:05:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:26.574 04:05:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:26.574 { 00:07:26.574 "nbd_device": "/dev/nbd0", 00:07:26.574 "bdev_name": "raid" 00:07:26.574 } 00:07:26.574 ]' 00:07:26.574 04:05:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:26.574 { 00:07:26.574 "nbd_device": "/dev/nbd0", 00:07:26.574 "bdev_name": "raid" 00:07:26.574 } 00:07:26.574 ]' 00:07:26.574 04:05:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:26.574 04:05:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:26.574 04:05:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:07:26.574 04:05:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:26.574 04:05:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:07:26.574 04:05:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:07:26.574 04:05:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:07:26.574 04:05:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:07:26.574 04:05:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:07:26.574 04:05:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:07:26.574 04:05:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:07:26.574 04:05:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:07:26.574 04:05:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:07:26.574 04:05:26 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:07:26.574 04:05:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:07:26.574 04:05:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:07:26.574 04:05:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:07:26.574 04:05:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:07:26.574 04:05:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:07:26.575 04:05:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:07:26.575 04:05:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:07:26.575 04:05:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:07:26.575 04:05:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:07:26.575 04:05:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:07:26.575 04:05:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:07:26.834 4096+0 records in 00:07:26.834 4096+0 records out 00:07:26.834 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0319648 s, 65.6 MB/s 00:07:26.834 04:05:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:07:26.834 4096+0 records in 00:07:26.834 4096+0 records out 00:07:26.834 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.219721 s, 9.5 MB/s 00:07:26.834 04:05:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:07:26.834 04:05:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest 
/dev/nbd0 00:07:26.834 04:05:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:07:26.834 04:05:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:26.834 04:05:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:07:26.834 04:05:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:07:26.834 04:05:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:07:26.834 128+0 records in 00:07:26.834 128+0 records out 00:07:26.834 65536 bytes (66 kB, 64 KiB) copied, 0.00144023 s, 45.5 MB/s 00:07:26.834 04:05:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:07:26.834 04:05:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:27.094 04:05:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:27.094 04:05:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:27.094 04:05:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:27.094 04:05:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:07:27.094 04:05:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:07:27.094 04:05:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:07:27.094 2035+0 records in 00:07:27.094 2035+0 records out 00:07:27.094 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0143933 s, 72.4 MB/s 00:07:27.094 04:05:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:07:27.094 04:05:26 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:27.094 04:05:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:27.094 04:05:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:27.094 04:05:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:27.094 04:05:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:07:27.094 04:05:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:07:27.094 04:05:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:07:27.094 456+0 records in 00:07:27.094 456+0 records out 00:07:27.094 233472 bytes (233 kB, 228 KiB) copied, 0.00391878 s, 59.6 MB/s 00:07:27.094 04:05:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:07:27.094 04:05:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:27.094 04:05:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:27.094 04:05:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:27.094 04:05:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:27.094 04:05:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:07:27.094 04:05:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:07:27.094 04:05:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:07:27.094 04:05:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:27.094 04:05:26 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:27.094 04:05:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:07:27.094 04:05:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:27.094 04:05:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:07:27.354 04:05:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:27.354 [2024-11-21 04:05:27.089499] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:27.354 04:05:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:27.354 04:05:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:27.354 04:05:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:27.354 04:05:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:27.354 04:05:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:27.354 04:05:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:07:27.354 04:05:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:07:27.354 04:05:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:07:27.354 04:05:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:27.354 04:05:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:27.354 04:05:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:27.354 04:05:27 bdev_raid.raid_function_test_concat -- 
bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:27.354 04:05:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:27.614 04:05:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:27.614 04:05:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:27.614 04:05:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:27.614 04:05:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:07:27.614 04:05:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:07:27.614 04:05:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:27.614 04:05:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:07:27.614 04:05:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:07:27.614 04:05:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 71861 00:07:27.614 04:05:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 71861 ']' 00:07:27.614 04:05:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # kill -0 71861 00:07:27.614 04:05:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname 00:07:27.614 04:05:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:27.614 04:05:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71861 00:07:27.614 killing process with pid 71861 00:07:27.614 04:05:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:27.614 04:05:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:27.614 04:05:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 71861' 00:07:27.614 04:05:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 71861 00:07:27.614 [2024-11-21 04:05:27.403727] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:27.614 04:05:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 71861 00:07:27.614 [2024-11-21 04:05:27.403876] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:27.614 [2024-11-21 04:05:27.403945] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:27.614 [2024-11-21 04:05:27.403961] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid, state offline 00:07:27.614 [2024-11-21 04:05:27.447334] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:27.874 ************************************ 00:07:27.874 END TEST raid_function_test_concat 00:07:27.874 ************************************ 00:07:27.874 04:05:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:07:27.874 00:07:27.874 real 0m2.825s 00:07:27.874 user 0m3.371s 00:07:27.874 sys 0m0.999s 00:07:27.874 04:05:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:27.874 04:05:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:27.874 04:05:27 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:07:27.874 04:05:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:27.875 04:05:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.875 04:05:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:27.875 ************************************ 00:07:27.875 START TEST raid0_resize_test 00:07:27.875 ************************************ 00:07:27.875 04:05:27 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@1129 -- # raid_resize_test 0 00:07:28.135 04:05:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:07:28.135 04:05:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:28.135 04:05:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:28.135 04:05:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:28.135 04:05:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:28.135 04:05:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:28.135 04:05:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:28.135 04:05:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:28.135 04:05:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=71978 00:07:28.135 04:05:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:28.135 04:05:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 71978' 00:07:28.135 Process raid pid: 71978 00:07:28.135 04:05:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 71978 00:07:28.135 04:05:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 71978 ']' 00:07:28.135 04:05:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.135 04:05:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:28.135 04:05:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:28.135 04:05:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:28.135 04:05:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.135 [2024-11-21 04:05:27.933648] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:07:28.135 [2024-11-21 04:05:27.933904] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:28.135 [2024-11-21 04:05:28.090290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.395 [2024-11-21 04:05:28.130680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.395 [2024-11-21 04:05:28.206693] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:28.395 [2024-11-21 04:05:28.206837] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:28.965 04:05:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:28.965 04:05:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0 00:07:28.965 04:05:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:28.965 04:05:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.965 04:05:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.965 Base_1 00:07:28.965 04:05:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.965 04:05:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:07:28.965 04:05:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.965 04:05:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set 
+x 00:07:28.965 Base_2 00:07:28.965 04:05:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.965 04:05:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:07:28.965 04:05:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:28.965 04:05:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.965 04:05:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.965 [2024-11-21 04:05:28.774309] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:28.965 [2024-11-21 04:05:28.776493] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:28.965 [2024-11-21 04:05:28.776548] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:07:28.965 [2024-11-21 04:05:28.776558] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:28.965 [2024-11-21 04:05:28.776852] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000021f0 00:07:28.965 [2024-11-21 04:05:28.776954] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:07:28.965 [2024-11-21 04:05:28.776963] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000001200 00:07:28.965 [2024-11-21 04:05:28.777076] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:28.965 04:05:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.965 04:05:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:28.965 04:05:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.965 04:05:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 
00:07:28.965 [2024-11-21 04:05:28.782283] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:28.965 [2024-11-21 04:05:28.782314] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:28.965 true 00:07:28.965 04:05:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.965 04:05:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:28.965 04:05:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.965 04:05:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.965 04:05:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:28.965 [2024-11-21 04:05:28.794448] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:28.965 04:05:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.965 04:05:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:07:28.965 04:05:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:07:28.965 04:05:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:07:28.965 04:05:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:07:28.965 04:05:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:07:28.965 04:05:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:28.965 04:05:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.965 04:05:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.965 [2024-11-21 04:05:28.846125] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:28.965 [2024-11-21 04:05:28.846191] 
bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:28.965 [2024-11-21 04:05:28.846242] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:07:28.965 true 00:07:28.965 04:05:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.965 04:05:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:28.965 04:05:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:28.965 04:05:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.965 04:05:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.965 [2024-11-21 04:05:28.862306] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:28.965 04:05:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.965 04:05:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:07:28.965 04:05:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:07:28.965 04:05:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:07:28.965 04:05:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:07:28.965 04:05:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:07:28.965 04:05:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 71978 00:07:28.965 04:05:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # '[' -z 71978 ']' 00:07:28.965 04:05:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 71978 00:07:28.965 04:05:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # uname 00:07:28.965 04:05:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- 
# '[' Linux = Linux ']' 00:07:28.965 04:05:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71978 00:07:28.965 04:05:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:28.965 04:05:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:28.965 04:05:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71978' 00:07:28.965 killing process with pid 71978 00:07:28.965 04:05:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 71978 00:07:28.965 [2024-11-21 04:05:28.927369] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:28.965 [2024-11-21 04:05:28.927516] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:28.965 04:05:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 71978 00:07:28.965 [2024-11-21 04:05:28.927601] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:28.965 [2024-11-21 04:05:28.927621] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Raid, state offline 00:07:28.965 [2024-11-21 04:05:28.929785] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:29.536 04:05:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:07:29.536 00:07:29.536 real 0m1.409s 00:07:29.536 user 0m1.489s 00:07:29.536 sys 0m0.379s 00:07:29.536 04:05:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:29.536 04:05:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.536 ************************************ 00:07:29.536 END TEST raid0_resize_test 00:07:29.536 ************************************ 00:07:29.536 04:05:29 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:07:29.536 
04:05:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:29.536 04:05:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:29.536 04:05:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:29.536 ************************************ 00:07:29.536 START TEST raid1_resize_test 00:07:29.536 ************************************ 00:07:29.536 04:05:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1 00:07:29.536 04:05:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:07:29.536 04:05:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:29.536 04:05:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:29.536 04:05:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:29.536 04:05:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:29.536 04:05:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:29.536 04:05:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:29.536 04:05:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:29.536 04:05:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=72023 00:07:29.536 04:05:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:29.536 Process raid pid: 72023 00:07:29.536 04:05:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 72023' 00:07:29.536 04:05:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 72023 00:07:29.536 04:05:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # '[' -z 72023 ']' 00:07:29.536 04:05:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:07:29.536 04:05:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:29.536 04:05:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:29.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:29.536 04:05:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:29.536 04:05:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.536 [2024-11-21 04:05:29.406849] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:07:29.536 [2024-11-21 04:05:29.407027] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:29.796 [2024-11-21 04:05:29.564055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.796 [2024-11-21 04:05:29.603602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.796 [2024-11-21 04:05:29.680348] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:29.796 [2024-11-21 04:05:29.680492] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:30.366 04:05:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:30.366 04:05:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0 00:07:30.366 04:05:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:30.366 04:05:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.366 04:05:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.366 
Base_1 00:07:30.366 04:05:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.366 04:05:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:07:30.366 04:05:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.366 04:05:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.366 Base_2 00:07:30.366 04:05:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.366 04:05:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:07:30.366 04:05:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:30.366 04:05:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.366 04:05:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.366 [2024-11-21 04:05:30.275488] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:30.366 [2024-11-21 04:05:30.277719] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:30.366 [2024-11-21 04:05:30.277830] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:07:30.366 [2024-11-21 04:05:30.277870] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:30.366 [2024-11-21 04:05:30.278256] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000021f0 00:07:30.366 [2024-11-21 04:05:30.278421] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:07:30.366 [2024-11-21 04:05:30.278461] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000001200 00:07:30.366 [2024-11-21 04:05:30.278653] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:07:30.366 04:05:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.366 04:05:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:30.366 04:05:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.366 04:05:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.366 [2024-11-21 04:05:30.287453] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:30.366 [2024-11-21 04:05:30.287526] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:30.366 true 00:07:30.366 04:05:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.366 04:05:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:30.366 04:05:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.366 04:05:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.366 04:05:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:30.366 [2024-11-21 04:05:30.303625] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:30.366 04:05:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.626 04:05:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:07:30.626 04:05:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:07:30.626 04:05:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:07:30.626 04:05:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:07:30.626 04:05:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:07:30.626 04:05:30 bdev_raid.raid1_resize_test -- 
bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:30.626 04:05:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.626 04:05:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.626 [2024-11-21 04:05:30.347346] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:30.626 [2024-11-21 04:05:30.347411] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:30.626 [2024-11-21 04:05:30.347470] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:07:30.626 true 00:07:30.626 04:05:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.626 04:05:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:30.626 04:05:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.626 04:05:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.626 04:05:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:30.626 [2024-11-21 04:05:30.359548] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:30.626 04:05:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.626 04:05:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:07:30.626 04:05:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:07:30.626 04:05:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:07:30.626 04:05:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:07:30.626 04:05:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:07:30.626 04:05:30 bdev_raid.raid1_resize_test -- 
bdev/bdev_raid.sh@387 -- # killprocess 72023 00:07:30.626 04:05:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # '[' -z 72023 ']' 00:07:30.626 04:05:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 72023 00:07:30.626 04:05:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname 00:07:30.626 04:05:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:30.626 04:05:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72023 00:07:30.626 04:05:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:30.626 04:05:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:30.626 killing process with pid 72023 00:07:30.626 04:05:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72023' 00:07:30.626 04:05:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 72023 00:07:30.626 [2024-11-21 04:05:30.446455] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:30.626 [2024-11-21 04:05:30.446560] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:30.626 04:05:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 72023 00:07:30.626 [2024-11-21 04:05:30.447059] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:30.626 [2024-11-21 04:05:30.447081] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Raid, state offline 00:07:30.626 [2024-11-21 04:05:30.448896] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:30.886 04:05:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:07:30.886 00:07:30.886 real 0m1.452s 00:07:30.886 user 0m1.547s 00:07:30.886 sys 0m0.367s 00:07:30.886 04:05:30 
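For readers following the `raid1_resize_test` flow above: the size assertions in the script reduce to simple block arithmetic (`num_blocks * block_size`, expressed in MiB), and a raid1 bdev only grows once *all* base bdevs have been resized. A minimal Python sketch of that check, using only the block counts and block size reported in this log (the helper name `size_mb` is illustrative, not part of the test scripts):

```python
# Size math behind raid1_resize_test, as reported in the log above.
# raid1 capacity tracks the smallest base bdev, so resizing only one
# base leaves the raid size unchanged.

BLKSIZE = 512          # blocklen reported by the log
MIB = 1024 * 1024

def size_mb(num_blocks: int, blksize: int = BLKSIZE) -> int:
    """Convert a block count to whole MiB, as the test script does."""
    return num_blocks * blksize // MIB

# Initial raid: two 32 MiB bases -> 65536 blocks reported.
assert size_mb(65536) == 32

# After resizing only Base_1 to 64 MiB, the raid still reports
# 65536 blocks (32 MiB): limited by the not-yet-resized Base_2.
assert size_mb(65536) == 32

# After Base_2 is also resized, the log shows "block count was
# changed from 65536 to 131072", i.e. 64 MiB.
assert size_mb(131072) == 64
```

This matches the log's two `bdev_get_bdevs -b Raid | jq '.[].num_blocks'` checks: 65536 blocks after the first resize, 131072 after the second.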
bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:30.886 ************************************ 00:07:30.886 END TEST raid1_resize_test 00:07:30.886 ************************************ 00:07:30.886 04:05:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.886 04:05:30 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:07:30.886 04:05:30 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:30.886 04:05:30 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:07:30.886 04:05:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:30.886 04:05:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:30.886 04:05:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:30.886 ************************************ 00:07:30.886 START TEST raid_state_function_test 00:07:30.886 ************************************ 00:07:30.886 04:05:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false 00:07:30.887 04:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:30.887 04:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:30.887 04:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:30.887 04:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:30.887 04:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:30.887 04:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:30.887 04:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:30.887 04:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:07:30.887 04:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:30.887 04:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:30.887 04:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:30.887 04:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:30.887 04:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:30.887 04:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:30.887 04:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:30.887 04:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:30.887 04:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:30.887 04:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:30.887 04:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:30.887 04:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:30.887 04:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:30.887 04:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:30.887 04:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:30.887 04:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=72080 00:07:30.887 04:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:31.147 Process raid pid: 72080 00:07:31.147 04:05:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72080' 00:07:31.147 04:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 72080 00:07:31.147 04:05:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 72080 ']' 00:07:31.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:31.147 04:05:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.147 04:05:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:31.147 04:05:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:31.147 04:05:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:31.147 04:05:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.147 [2024-11-21 04:05:30.942848] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:07:31.147 [2024-11-21 04:05:30.943074] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:31.147 [2024-11-21 04:05:31.096351] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.406 [2024-11-21 04:05:31.137169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.406 [2024-11-21 04:05:31.214103] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:31.406 [2024-11-21 04:05:31.214143] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:31.976 04:05:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:31.976 04:05:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:31.976 04:05:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:31.976 04:05:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.976 04:05:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.976 [2024-11-21 04:05:31.790269] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:31.976 [2024-11-21 04:05:31.790330] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:31.976 [2024-11-21 04:05:31.790348] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:31.976 [2024-11-21 04:05:31.790360] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:31.976 04:05:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.976 04:05:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:31.976 04:05:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:31.976 04:05:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:31.976 04:05:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:31.976 04:05:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:31.976 04:05:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:31.976 04:05:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:31.976 04:05:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:31.976 04:05:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:31.976 04:05:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:31.976 04:05:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:31.976 04:05:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.976 04:05:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.976 04:05:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:31.976 04:05:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.976 04:05:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:31.976 "name": "Existed_Raid", 00:07:31.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:31.976 "strip_size_kb": 64, 00:07:31.976 "state": "configuring", 00:07:31.976 
"raid_level": "raid0", 00:07:31.976 "superblock": false, 00:07:31.976 "num_base_bdevs": 2, 00:07:31.976 "num_base_bdevs_discovered": 0, 00:07:31.976 "num_base_bdevs_operational": 2, 00:07:31.976 "base_bdevs_list": [ 00:07:31.976 { 00:07:31.976 "name": "BaseBdev1", 00:07:31.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:31.976 "is_configured": false, 00:07:31.976 "data_offset": 0, 00:07:31.976 "data_size": 0 00:07:31.976 }, 00:07:31.976 { 00:07:31.976 "name": "BaseBdev2", 00:07:31.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:31.976 "is_configured": false, 00:07:31.976 "data_offset": 0, 00:07:31.976 "data_size": 0 00:07:31.976 } 00:07:31.976 ] 00:07:31.976 }' 00:07:31.976 04:05:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:31.976 04:05:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.546 04:05:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:32.546 04:05:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.546 04:05:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.546 [2024-11-21 04:05:32.293361] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:32.546 [2024-11-21 04:05:32.293474] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:07:32.546 04:05:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.546 04:05:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:32.546 04:05:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.546 04:05:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:07:32.546 [2024-11-21 04:05:32.305337] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:32.546 [2024-11-21 04:05:32.305422] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:32.546 [2024-11-21 04:05:32.305449] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:32.546 [2024-11-21 04:05:32.305486] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:32.546 04:05:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.546 04:05:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:32.546 04:05:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.546 04:05:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.546 [2024-11-21 04:05:32.332850] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:32.546 BaseBdev1 00:07:32.546 04:05:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.546 04:05:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:32.546 04:05:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:32.546 04:05:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:32.546 04:05:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:32.546 04:05:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:32.546 04:05:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:32.546 04:05:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:07:32.546 04:05:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.546 04:05:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.546 04:05:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.546 04:05:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:32.546 04:05:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.546 04:05:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.546 [ 00:07:32.546 { 00:07:32.546 "name": "BaseBdev1", 00:07:32.546 "aliases": [ 00:07:32.546 "3915f875-8c5b-4b9d-b01e-1c68bee4c4c9" 00:07:32.546 ], 00:07:32.546 "product_name": "Malloc disk", 00:07:32.546 "block_size": 512, 00:07:32.546 "num_blocks": 65536, 00:07:32.546 "uuid": "3915f875-8c5b-4b9d-b01e-1c68bee4c4c9", 00:07:32.546 "assigned_rate_limits": { 00:07:32.546 "rw_ios_per_sec": 0, 00:07:32.546 "rw_mbytes_per_sec": 0, 00:07:32.546 "r_mbytes_per_sec": 0, 00:07:32.546 "w_mbytes_per_sec": 0 00:07:32.546 }, 00:07:32.546 "claimed": true, 00:07:32.546 "claim_type": "exclusive_write", 00:07:32.546 "zoned": false, 00:07:32.546 "supported_io_types": { 00:07:32.546 "read": true, 00:07:32.546 "write": true, 00:07:32.547 "unmap": true, 00:07:32.547 "flush": true, 00:07:32.547 "reset": true, 00:07:32.547 "nvme_admin": false, 00:07:32.547 "nvme_io": false, 00:07:32.547 "nvme_io_md": false, 00:07:32.547 "write_zeroes": true, 00:07:32.547 "zcopy": true, 00:07:32.547 "get_zone_info": false, 00:07:32.547 "zone_management": false, 00:07:32.547 "zone_append": false, 00:07:32.547 "compare": false, 00:07:32.547 "compare_and_write": false, 00:07:32.547 "abort": true, 00:07:32.547 "seek_hole": false, 00:07:32.547 "seek_data": false, 00:07:32.547 "copy": true, 00:07:32.547 "nvme_iov_md": 
false 00:07:32.547 }, 00:07:32.547 "memory_domains": [ 00:07:32.547 { 00:07:32.547 "dma_device_id": "system", 00:07:32.547 "dma_device_type": 1 00:07:32.547 }, 00:07:32.547 { 00:07:32.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:32.547 "dma_device_type": 2 00:07:32.547 } 00:07:32.547 ], 00:07:32.547 "driver_specific": {} 00:07:32.547 } 00:07:32.547 ] 00:07:32.547 04:05:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.547 04:05:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:32.547 04:05:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:32.547 04:05:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:32.547 04:05:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:32.547 04:05:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:32.547 04:05:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:32.547 04:05:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:32.547 04:05:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:32.547 04:05:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:32.547 04:05:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:32.547 04:05:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:32.547 04:05:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:32.547 04:05:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:32.547 
04:05:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.547 04:05:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.547 04:05:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.547 04:05:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:32.547 "name": "Existed_Raid", 00:07:32.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:32.547 "strip_size_kb": 64, 00:07:32.547 "state": "configuring", 00:07:32.547 "raid_level": "raid0", 00:07:32.547 "superblock": false, 00:07:32.547 "num_base_bdevs": 2, 00:07:32.547 "num_base_bdevs_discovered": 1, 00:07:32.547 "num_base_bdevs_operational": 2, 00:07:32.547 "base_bdevs_list": [ 00:07:32.547 { 00:07:32.547 "name": "BaseBdev1", 00:07:32.547 "uuid": "3915f875-8c5b-4b9d-b01e-1c68bee4c4c9", 00:07:32.547 "is_configured": true, 00:07:32.547 "data_offset": 0, 00:07:32.547 "data_size": 65536 00:07:32.547 }, 00:07:32.547 { 00:07:32.547 "name": "BaseBdev2", 00:07:32.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:32.547 "is_configured": false, 00:07:32.547 "data_offset": 0, 00:07:32.547 "data_size": 0 00:07:32.547 } 00:07:32.547 ] 00:07:32.547 }' 00:07:32.547 04:05:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:32.547 04:05:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.116 04:05:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:33.116 04:05:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.116 04:05:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.116 [2024-11-21 04:05:32.808191] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:33.116 [2024-11-21 04:05:32.808285] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:07:33.116 04:05:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.116 04:05:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:33.116 04:05:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.116 04:05:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.116 [2024-11-21 04:05:32.820203] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:33.116 [2024-11-21 04:05:32.822472] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:33.116 [2024-11-21 04:05:32.822516] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:33.116 04:05:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.116 04:05:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:33.116 04:05:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:33.116 04:05:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:33.116 04:05:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:33.116 04:05:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:33.116 04:05:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:33.116 04:05:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:33.116 04:05:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:07:33.116 04:05:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:33.116 04:05:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:33.116 04:05:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:33.116 04:05:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:33.116 04:05:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:33.116 04:05:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.116 04:05:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.116 04:05:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:33.116 04:05:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.116 04:05:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:33.116 "name": "Existed_Raid", 00:07:33.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:33.116 "strip_size_kb": 64, 00:07:33.116 "state": "configuring", 00:07:33.116 "raid_level": "raid0", 00:07:33.116 "superblock": false, 00:07:33.116 "num_base_bdevs": 2, 00:07:33.116 "num_base_bdevs_discovered": 1, 00:07:33.116 "num_base_bdevs_operational": 2, 00:07:33.116 "base_bdevs_list": [ 00:07:33.116 { 00:07:33.116 "name": "BaseBdev1", 00:07:33.116 "uuid": "3915f875-8c5b-4b9d-b01e-1c68bee4c4c9", 00:07:33.116 "is_configured": true, 00:07:33.116 "data_offset": 0, 00:07:33.116 "data_size": 65536 00:07:33.116 }, 00:07:33.116 { 00:07:33.116 "name": "BaseBdev2", 00:07:33.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:33.116 "is_configured": false, 00:07:33.116 "data_offset": 0, 00:07:33.116 "data_size": 0 00:07:33.116 } 00:07:33.116 
] 00:07:33.116 }' 00:07:33.116 04:05:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:33.116 04:05:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.376 04:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:33.376 04:05:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.376 04:05:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.376 [2024-11-21 04:05:33.260717] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:33.376 [2024-11-21 04:05:33.260767] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:33.376 [2024-11-21 04:05:33.260777] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:33.376 [2024-11-21 04:05:33.261087] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:33.376 [2024-11-21 04:05:33.261284] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:33.376 [2024-11-21 04:05:33.261302] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:07:33.376 [2024-11-21 04:05:33.261561] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:33.376 BaseBdev2 00:07:33.376 04:05:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.376 04:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:33.376 04:05:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:33.376 04:05:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:33.376 04:05:33 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:33.376 04:05:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:33.376 04:05:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:33.376 04:05:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:33.376 04:05:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.376 04:05:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.376 04:05:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.376 04:05:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:33.376 04:05:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.376 04:05:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.376 [ 00:07:33.376 { 00:07:33.376 "name": "BaseBdev2", 00:07:33.376 "aliases": [ 00:07:33.376 "ea5cdb85-43e6-483a-9264-ea35376842b5" 00:07:33.376 ], 00:07:33.376 "product_name": "Malloc disk", 00:07:33.376 "block_size": 512, 00:07:33.376 "num_blocks": 65536, 00:07:33.376 "uuid": "ea5cdb85-43e6-483a-9264-ea35376842b5", 00:07:33.376 "assigned_rate_limits": { 00:07:33.376 "rw_ios_per_sec": 0, 00:07:33.376 "rw_mbytes_per_sec": 0, 00:07:33.376 "r_mbytes_per_sec": 0, 00:07:33.376 "w_mbytes_per_sec": 0 00:07:33.376 }, 00:07:33.376 "claimed": true, 00:07:33.376 "claim_type": "exclusive_write", 00:07:33.376 "zoned": false, 00:07:33.376 "supported_io_types": { 00:07:33.376 "read": true, 00:07:33.376 "write": true, 00:07:33.376 "unmap": true, 00:07:33.376 "flush": true, 00:07:33.376 "reset": true, 00:07:33.376 "nvme_admin": false, 00:07:33.376 "nvme_io": false, 00:07:33.376 "nvme_io_md": 
false, 00:07:33.376 "write_zeroes": true, 00:07:33.376 "zcopy": true, 00:07:33.376 "get_zone_info": false, 00:07:33.376 "zone_management": false, 00:07:33.376 "zone_append": false, 00:07:33.376 "compare": false, 00:07:33.376 "compare_and_write": false, 00:07:33.376 "abort": true, 00:07:33.376 "seek_hole": false, 00:07:33.376 "seek_data": false, 00:07:33.376 "copy": true, 00:07:33.376 "nvme_iov_md": false 00:07:33.376 }, 00:07:33.376 "memory_domains": [ 00:07:33.376 { 00:07:33.376 "dma_device_id": "system", 00:07:33.376 "dma_device_type": 1 00:07:33.376 }, 00:07:33.376 { 00:07:33.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:33.376 "dma_device_type": 2 00:07:33.376 } 00:07:33.376 ], 00:07:33.376 "driver_specific": {} 00:07:33.376 } 00:07:33.376 ] 00:07:33.376 04:05:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.376 04:05:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:33.376 04:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:33.376 04:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:33.376 04:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:33.376 04:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:33.376 04:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:33.376 04:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:33.376 04:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:33.376 04:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:33.376 04:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:33.376 04:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:33.376 04:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:33.376 04:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:33.376 04:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:33.376 04:05:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.376 04:05:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.376 04:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:33.376 04:05:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.636 04:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:33.636 "name": "Existed_Raid", 00:07:33.636 "uuid": "9275fb3b-9370-46bf-a4d9-38e05d9b1b90", 00:07:33.636 "strip_size_kb": 64, 00:07:33.636 "state": "online", 00:07:33.636 "raid_level": "raid0", 00:07:33.636 "superblock": false, 00:07:33.636 "num_base_bdevs": 2, 00:07:33.636 "num_base_bdevs_discovered": 2, 00:07:33.636 "num_base_bdevs_operational": 2, 00:07:33.636 "base_bdevs_list": [ 00:07:33.636 { 00:07:33.636 "name": "BaseBdev1", 00:07:33.636 "uuid": "3915f875-8c5b-4b9d-b01e-1c68bee4c4c9", 00:07:33.636 "is_configured": true, 00:07:33.636 "data_offset": 0, 00:07:33.636 "data_size": 65536 00:07:33.636 }, 00:07:33.636 { 00:07:33.636 "name": "BaseBdev2", 00:07:33.636 "uuid": "ea5cdb85-43e6-483a-9264-ea35376842b5", 00:07:33.636 "is_configured": true, 00:07:33.636 "data_offset": 0, 00:07:33.636 "data_size": 65536 00:07:33.636 } 00:07:33.636 ] 00:07:33.636 }' 00:07:33.636 04:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:07:33.636 04:05:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.896 04:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:33.896 04:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:33.896 04:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:33.896 04:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:33.896 04:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:33.896 04:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:33.896 04:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:33.896 04:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:33.896 04:05:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.896 04:05:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.896 [2024-11-21 04:05:33.736470] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:33.896 04:05:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.896 04:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:33.896 "name": "Existed_Raid", 00:07:33.896 "aliases": [ 00:07:33.896 "9275fb3b-9370-46bf-a4d9-38e05d9b1b90" 00:07:33.896 ], 00:07:33.896 "product_name": "Raid Volume", 00:07:33.896 "block_size": 512, 00:07:33.896 "num_blocks": 131072, 00:07:33.896 "uuid": "9275fb3b-9370-46bf-a4d9-38e05d9b1b90", 00:07:33.896 "assigned_rate_limits": { 00:07:33.896 "rw_ios_per_sec": 0, 00:07:33.896 "rw_mbytes_per_sec": 0, 00:07:33.896 "r_mbytes_per_sec": 
0, 00:07:33.896 "w_mbytes_per_sec": 0 00:07:33.896 }, 00:07:33.896 "claimed": false, 00:07:33.896 "zoned": false, 00:07:33.896 "supported_io_types": { 00:07:33.896 "read": true, 00:07:33.896 "write": true, 00:07:33.896 "unmap": true, 00:07:33.896 "flush": true, 00:07:33.896 "reset": true, 00:07:33.896 "nvme_admin": false, 00:07:33.896 "nvme_io": false, 00:07:33.896 "nvme_io_md": false, 00:07:33.896 "write_zeroes": true, 00:07:33.896 "zcopy": false, 00:07:33.896 "get_zone_info": false, 00:07:33.896 "zone_management": false, 00:07:33.896 "zone_append": false, 00:07:33.896 "compare": false, 00:07:33.896 "compare_and_write": false, 00:07:33.896 "abort": false, 00:07:33.896 "seek_hole": false, 00:07:33.896 "seek_data": false, 00:07:33.896 "copy": false, 00:07:33.896 "nvme_iov_md": false 00:07:33.896 }, 00:07:33.896 "memory_domains": [ 00:07:33.896 { 00:07:33.896 "dma_device_id": "system", 00:07:33.896 "dma_device_type": 1 00:07:33.896 }, 00:07:33.896 { 00:07:33.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:33.896 "dma_device_type": 2 00:07:33.896 }, 00:07:33.896 { 00:07:33.896 "dma_device_id": "system", 00:07:33.896 "dma_device_type": 1 00:07:33.896 }, 00:07:33.896 { 00:07:33.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:33.896 "dma_device_type": 2 00:07:33.896 } 00:07:33.896 ], 00:07:33.896 "driver_specific": { 00:07:33.897 "raid": { 00:07:33.897 "uuid": "9275fb3b-9370-46bf-a4d9-38e05d9b1b90", 00:07:33.897 "strip_size_kb": 64, 00:07:33.897 "state": "online", 00:07:33.897 "raid_level": "raid0", 00:07:33.897 "superblock": false, 00:07:33.897 "num_base_bdevs": 2, 00:07:33.897 "num_base_bdevs_discovered": 2, 00:07:33.897 "num_base_bdevs_operational": 2, 00:07:33.897 "base_bdevs_list": [ 00:07:33.897 { 00:07:33.897 "name": "BaseBdev1", 00:07:33.897 "uuid": "3915f875-8c5b-4b9d-b01e-1c68bee4c4c9", 00:07:33.897 "is_configured": true, 00:07:33.897 "data_offset": 0, 00:07:33.897 "data_size": 65536 00:07:33.897 }, 00:07:33.897 { 00:07:33.897 "name": "BaseBdev2", 
00:07:33.897 "uuid": "ea5cdb85-43e6-483a-9264-ea35376842b5", 00:07:33.897 "is_configured": true, 00:07:33.897 "data_offset": 0, 00:07:33.897 "data_size": 65536 00:07:33.897 } 00:07:33.897 ] 00:07:33.897 } 00:07:33.897 } 00:07:33.897 }' 00:07:33.897 04:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:33.897 04:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:33.897 BaseBdev2' 00:07:33.897 04:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:33.897 04:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:33.897 04:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:33.897 04:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:33.897 04:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:33.897 04:05:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.897 04:05:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.157 04:05:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.157 04:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:34.157 04:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:34.157 04:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:34.157 04:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:07:34.157 04:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:34.157 04:05:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.157 04:05:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.157 04:05:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.157 04:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:34.157 04:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:34.157 04:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:34.157 04:05:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.157 04:05:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.157 [2024-11-21 04:05:33.956212] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:34.157 [2024-11-21 04:05:33.956300] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:34.157 [2024-11-21 04:05:33.956382] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:34.157 04:05:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.157 04:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:34.157 04:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:34.157 04:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:34.157 04:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:34.157 04:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:07:34.157 04:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:34.157 04:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:34.157 04:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:34.157 04:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:34.157 04:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:34.157 04:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:34.157 04:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:34.157 04:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:34.157 04:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:34.157 04:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:34.157 04:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:34.157 04:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:34.157 04:05:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.157 04:05:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.157 04:05:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.157 04:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:34.157 "name": "Existed_Raid", 00:07:34.157 "uuid": "9275fb3b-9370-46bf-a4d9-38e05d9b1b90", 00:07:34.157 "strip_size_kb": 64, 00:07:34.157 
"state": "offline", 00:07:34.157 "raid_level": "raid0", 00:07:34.157 "superblock": false, 00:07:34.157 "num_base_bdevs": 2, 00:07:34.157 "num_base_bdevs_discovered": 1, 00:07:34.157 "num_base_bdevs_operational": 1, 00:07:34.157 "base_bdevs_list": [ 00:07:34.157 { 00:07:34.157 "name": null, 00:07:34.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:34.157 "is_configured": false, 00:07:34.157 "data_offset": 0, 00:07:34.157 "data_size": 65536 00:07:34.157 }, 00:07:34.157 { 00:07:34.157 "name": "BaseBdev2", 00:07:34.157 "uuid": "ea5cdb85-43e6-483a-9264-ea35376842b5", 00:07:34.157 "is_configured": true, 00:07:34.157 "data_offset": 0, 00:07:34.157 "data_size": 65536 00:07:34.157 } 00:07:34.157 ] 00:07:34.157 }' 00:07:34.157 04:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:34.157 04:05:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.727 04:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:34.727 04:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:34.727 04:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:34.727 04:05:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.727 04:05:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.727 04:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:34.727 04:05:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.727 04:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:34.727 04:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:34.727 04:05:34 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:34.727 04:05:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.727 04:05:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.727 [2024-11-21 04:05:34.524392] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:34.727 [2024-11-21 04:05:34.524532] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:07:34.727 04:05:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.727 04:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:34.727 04:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:34.727 04:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:34.727 04:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:34.728 04:05:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.728 04:05:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.728 04:05:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.728 04:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:34.728 04:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:34.728 04:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:34.728 04:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 72080 00:07:34.728 04:05:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 72080 ']' 00:07:34.728 04:05:34 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 72080 00:07:34.728 04:05:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:34.728 04:05:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:34.728 04:05:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72080 00:07:34.728 killing process with pid 72080 00:07:34.728 04:05:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:34.728 04:05:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:34.728 04:05:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72080' 00:07:34.728 04:05:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 72080 00:07:34.728 [2024-11-21 04:05:34.644913] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:34.728 04:05:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 72080 00:07:34.728 [2024-11-21 04:05:34.646546] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:35.306 04:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:35.306 00:07:35.306 real 0m4.128s 00:07:35.306 user 0m6.348s 00:07:35.306 sys 0m0.896s 00:07:35.306 04:05:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:35.306 04:05:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.306 ************************************ 00:07:35.306 END TEST raid_state_function_test 00:07:35.306 ************************************ 00:07:35.306 04:05:35 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:07:35.306 04:05:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:07:35.306 04:05:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:35.306 04:05:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:35.306 ************************************ 00:07:35.306 START TEST raid_state_function_test_sb 00:07:35.306 ************************************ 00:07:35.306 04:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true 00:07:35.306 04:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:35.306 04:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:35.306 04:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:35.306 04:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:35.306 04:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:35.306 04:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:35.306 04:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:35.306 04:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:35.306 04:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:35.306 04:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:35.306 04:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:35.306 04:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:35.306 04:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:35.306 04:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:07:35.306 04:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:35.306 04:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:35.306 04:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:35.306 04:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:35.306 04:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:35.306 04:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:35.306 04:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:35.306 04:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:35.306 04:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:35.306 Process raid pid: 72322 00:07:35.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:35.306 04:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=72322 00:07:35.306 04:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72322' 00:07:35.306 04:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72322 00:07:35.306 04:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:35.306 04:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 72322 ']' 00:07:35.306 04:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.306 04:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:35.306 04:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.306 04:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:35.306 04:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:35.306 [2024-11-21 04:05:35.140952] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:07:35.306 [2024-11-21 04:05:35.141153] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:35.578 [2024-11-21 04:05:35.299304] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.578 [2024-11-21 04:05:35.340909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.578 [2024-11-21 04:05:35.420332] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:35.578 [2024-11-21 04:05:35.420483] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:36.149 04:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:36.149 04:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:36.149 04:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:36.149 04:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.149 04:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:36.149 [2024-11-21 04:05:35.992857] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:36.149 [2024-11-21 04:05:35.992916] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:36.149 [2024-11-21 04:05:35.992937] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:36.149 [2024-11-21 04:05:35.992950] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:36.149 04:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.149 
04:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:36.149 04:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:36.149 04:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:36.149 04:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:36.149 04:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:36.149 04:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:36.149 04:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:36.149 04:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:36.149 04:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:36.149 04:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:36.149 04:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:36.149 04:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:36.149 04:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.149 04:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:36.149 04:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.149 04:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:36.149 "name": "Existed_Raid", 00:07:36.149 "uuid": "94ba1185-f32b-403f-a3c0-a471c7507e53", 00:07:36.149 "strip_size_kb": 
64, 00:07:36.149 "state": "configuring", 00:07:36.149 "raid_level": "raid0", 00:07:36.149 "superblock": true, 00:07:36.149 "num_base_bdevs": 2, 00:07:36.149 "num_base_bdevs_discovered": 0, 00:07:36.149 "num_base_bdevs_operational": 2, 00:07:36.149 "base_bdevs_list": [ 00:07:36.149 { 00:07:36.149 "name": "BaseBdev1", 00:07:36.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:36.149 "is_configured": false, 00:07:36.149 "data_offset": 0, 00:07:36.149 "data_size": 0 00:07:36.149 }, 00:07:36.149 { 00:07:36.149 "name": "BaseBdev2", 00:07:36.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:36.149 "is_configured": false, 00:07:36.149 "data_offset": 0, 00:07:36.149 "data_size": 0 00:07:36.149 } 00:07:36.149 ] 00:07:36.149 }' 00:07:36.149 04:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:36.149 04:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:36.720 04:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:36.720 04:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.720 04:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:36.720 [2024-11-21 04:05:36.428107] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:36.720 [2024-11-21 04:05:36.428213] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:07:36.720 04:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.720 04:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:36.720 04:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.720 04:05:36 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:36.720 [2024-11-21 04:05:36.440123] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:36.720 [2024-11-21 04:05:36.440207] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:36.720 [2024-11-21 04:05:36.440266] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:36.720 [2024-11-21 04:05:36.440309] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:36.720 04:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.720 04:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:36.720 04:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.720 04:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:36.720 [2024-11-21 04:05:36.467193] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:36.720 BaseBdev1 00:07:36.720 04:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.720 04:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:36.720 04:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:36.720 04:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:36.720 04:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:36.720 04:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:36.720 04:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:07:36.720 04:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:36.720 04:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.720 04:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:36.720 04:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.720 04:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:36.720 04:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.720 04:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:36.720 [ 00:07:36.720 { 00:07:36.720 "name": "BaseBdev1", 00:07:36.720 "aliases": [ 00:07:36.720 "4a7e8c5b-626b-49ca-a11b-8c770f900bd4" 00:07:36.720 ], 00:07:36.720 "product_name": "Malloc disk", 00:07:36.720 "block_size": 512, 00:07:36.720 "num_blocks": 65536, 00:07:36.720 "uuid": "4a7e8c5b-626b-49ca-a11b-8c770f900bd4", 00:07:36.720 "assigned_rate_limits": { 00:07:36.720 "rw_ios_per_sec": 0, 00:07:36.720 "rw_mbytes_per_sec": 0, 00:07:36.720 "r_mbytes_per_sec": 0, 00:07:36.720 "w_mbytes_per_sec": 0 00:07:36.720 }, 00:07:36.720 "claimed": true, 00:07:36.720 "claim_type": "exclusive_write", 00:07:36.720 "zoned": false, 00:07:36.720 "supported_io_types": { 00:07:36.720 "read": true, 00:07:36.720 "write": true, 00:07:36.720 "unmap": true, 00:07:36.720 "flush": true, 00:07:36.720 "reset": true, 00:07:36.720 "nvme_admin": false, 00:07:36.720 "nvme_io": false, 00:07:36.720 "nvme_io_md": false, 00:07:36.720 "write_zeroes": true, 00:07:36.720 "zcopy": true, 00:07:36.720 "get_zone_info": false, 00:07:36.720 "zone_management": false, 00:07:36.720 "zone_append": false, 00:07:36.720 "compare": false, 00:07:36.720 "compare_and_write": false, 00:07:36.720 
"abort": true, 00:07:36.720 "seek_hole": false, 00:07:36.720 "seek_data": false, 00:07:36.720 "copy": true, 00:07:36.720 "nvme_iov_md": false 00:07:36.720 }, 00:07:36.720 "memory_domains": [ 00:07:36.720 { 00:07:36.720 "dma_device_id": "system", 00:07:36.720 "dma_device_type": 1 00:07:36.720 }, 00:07:36.720 { 00:07:36.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:36.720 "dma_device_type": 2 00:07:36.720 } 00:07:36.720 ], 00:07:36.720 "driver_specific": {} 00:07:36.720 } 00:07:36.720 ] 00:07:36.720 04:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.720 04:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:36.720 04:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:36.720 04:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:36.720 04:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:36.720 04:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:36.721 04:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:36.721 04:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:36.721 04:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:36.721 04:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:36.721 04:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:36.721 04:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:36.721 04:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:07:36.721 04:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:36.721 04:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.721 04:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:36.721 04:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.721 04:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:36.721 "name": "Existed_Raid", 00:07:36.721 "uuid": "179182d7-f83e-43df-b1c8-92b975212986", 00:07:36.721 "strip_size_kb": 64, 00:07:36.721 "state": "configuring", 00:07:36.721 "raid_level": "raid0", 00:07:36.721 "superblock": true, 00:07:36.721 "num_base_bdevs": 2, 00:07:36.721 "num_base_bdevs_discovered": 1, 00:07:36.721 "num_base_bdevs_operational": 2, 00:07:36.721 "base_bdevs_list": [ 00:07:36.721 { 00:07:36.721 "name": "BaseBdev1", 00:07:36.721 "uuid": "4a7e8c5b-626b-49ca-a11b-8c770f900bd4", 00:07:36.721 "is_configured": true, 00:07:36.721 "data_offset": 2048, 00:07:36.721 "data_size": 63488 00:07:36.721 }, 00:07:36.721 { 00:07:36.721 "name": "BaseBdev2", 00:07:36.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:36.721 "is_configured": false, 00:07:36.721 "data_offset": 0, 00:07:36.721 "data_size": 0 00:07:36.721 } 00:07:36.721 ] 00:07:36.721 }' 00:07:36.721 04:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:36.721 04:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.291 04:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:37.291 04:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.291 04:05:36 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:37.291 [2024-11-21 04:05:36.966398] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:37.291 [2024-11-21 04:05:36.966463] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:07:37.291 04:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.291 04:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:37.291 04:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.291 04:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.291 [2024-11-21 04:05:36.978404] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:37.291 [2024-11-21 04:05:36.980632] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:37.291 [2024-11-21 04:05:36.980675] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:37.291 04:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.291 04:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:37.291 04:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:37.291 04:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:37.292 04:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:37.292 04:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:37.292 04:05:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:37.292 04:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:37.292 04:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:37.292 04:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:37.292 04:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:37.292 04:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:37.292 04:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:37.292 04:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:37.292 04:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.292 04:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.292 04:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:37.292 04:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.292 04:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:37.292 "name": "Existed_Raid", 00:07:37.292 "uuid": "6ec78c01-def6-4203-a859-1111c90cf105", 00:07:37.292 "strip_size_kb": 64, 00:07:37.292 "state": "configuring", 00:07:37.292 "raid_level": "raid0", 00:07:37.292 "superblock": true, 00:07:37.292 "num_base_bdevs": 2, 00:07:37.292 "num_base_bdevs_discovered": 1, 00:07:37.292 "num_base_bdevs_operational": 2, 00:07:37.292 "base_bdevs_list": [ 00:07:37.292 { 00:07:37.292 "name": "BaseBdev1", 00:07:37.292 "uuid": "4a7e8c5b-626b-49ca-a11b-8c770f900bd4", 00:07:37.292 "is_configured": true, 00:07:37.292 "data_offset": 2048, 
00:07:37.292 "data_size": 63488 00:07:37.292 }, 00:07:37.292 { 00:07:37.292 "name": "BaseBdev2", 00:07:37.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:37.292 "is_configured": false, 00:07:37.292 "data_offset": 0, 00:07:37.292 "data_size": 0 00:07:37.292 } 00:07:37.292 ] 00:07:37.292 }' 00:07:37.292 04:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:37.292 04:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.552 04:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:37.552 04:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.552 04:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.552 [2024-11-21 04:05:37.438365] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:37.552 [2024-11-21 04:05:37.438746] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:37.552 [2024-11-21 04:05:37.438800] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:37.552 [2024-11-21 04:05:37.439169] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:37.552 BaseBdev2 00:07:37.552 [2024-11-21 04:05:37.439391] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:37.552 [2024-11-21 04:05:37.439456] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:07:37.552 [2024-11-21 04:05:37.439641] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:37.552 04:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.552 04:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:07:37.552 04:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:37.552 04:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:37.552 04:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:37.552 04:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:37.552 04:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:37.552 04:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:37.552 04:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.552 04:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.552 04:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.552 04:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:37.552 04:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.552 04:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.552 [ 00:07:37.552 { 00:07:37.552 "name": "BaseBdev2", 00:07:37.552 "aliases": [ 00:07:37.552 "5ac49ab9-f87c-40d0-8a07-b8621c5267f8" 00:07:37.552 ], 00:07:37.552 "product_name": "Malloc disk", 00:07:37.552 "block_size": 512, 00:07:37.552 "num_blocks": 65536, 00:07:37.552 "uuid": "5ac49ab9-f87c-40d0-8a07-b8621c5267f8", 00:07:37.552 "assigned_rate_limits": { 00:07:37.552 "rw_ios_per_sec": 0, 00:07:37.552 "rw_mbytes_per_sec": 0, 00:07:37.552 "r_mbytes_per_sec": 0, 00:07:37.552 "w_mbytes_per_sec": 0 00:07:37.552 }, 00:07:37.552 "claimed": true, 00:07:37.552 "claim_type": 
"exclusive_write", 00:07:37.552 "zoned": false, 00:07:37.552 "supported_io_types": { 00:07:37.552 "read": true, 00:07:37.552 "write": true, 00:07:37.552 "unmap": true, 00:07:37.552 "flush": true, 00:07:37.552 "reset": true, 00:07:37.552 "nvme_admin": false, 00:07:37.552 "nvme_io": false, 00:07:37.552 "nvme_io_md": false, 00:07:37.552 "write_zeroes": true, 00:07:37.552 "zcopy": true, 00:07:37.552 "get_zone_info": false, 00:07:37.552 "zone_management": false, 00:07:37.552 "zone_append": false, 00:07:37.552 "compare": false, 00:07:37.552 "compare_and_write": false, 00:07:37.552 "abort": true, 00:07:37.552 "seek_hole": false, 00:07:37.552 "seek_data": false, 00:07:37.552 "copy": true, 00:07:37.552 "nvme_iov_md": false 00:07:37.552 }, 00:07:37.552 "memory_domains": [ 00:07:37.552 { 00:07:37.552 "dma_device_id": "system", 00:07:37.552 "dma_device_type": 1 00:07:37.552 }, 00:07:37.552 { 00:07:37.552 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:37.552 "dma_device_type": 2 00:07:37.552 } 00:07:37.552 ], 00:07:37.552 "driver_specific": {} 00:07:37.552 } 00:07:37.552 ] 00:07:37.552 04:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.552 04:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:37.552 04:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:37.552 04:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:37.552 04:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:37.552 04:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:37.552 04:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:37.552 04:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:07:37.552 04:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:37.552 04:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:37.552 04:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:37.552 04:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:37.552 04:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:37.552 04:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:37.552 04:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:37.552 04:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:37.552 04:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.552 04:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.552 04:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.812 04:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:37.812 "name": "Existed_Raid", 00:07:37.812 "uuid": "6ec78c01-def6-4203-a859-1111c90cf105", 00:07:37.812 "strip_size_kb": 64, 00:07:37.812 "state": "online", 00:07:37.812 "raid_level": "raid0", 00:07:37.812 "superblock": true, 00:07:37.812 "num_base_bdevs": 2, 00:07:37.812 "num_base_bdevs_discovered": 2, 00:07:37.812 "num_base_bdevs_operational": 2, 00:07:37.812 "base_bdevs_list": [ 00:07:37.812 { 00:07:37.812 "name": "BaseBdev1", 00:07:37.812 "uuid": "4a7e8c5b-626b-49ca-a11b-8c770f900bd4", 00:07:37.812 "is_configured": true, 00:07:37.812 "data_offset": 2048, 00:07:37.812 "data_size": 63488 
00:07:37.812 }, 00:07:37.812 { 00:07:37.812 "name": "BaseBdev2", 00:07:37.812 "uuid": "5ac49ab9-f87c-40d0-8a07-b8621c5267f8", 00:07:37.812 "is_configured": true, 00:07:37.812 "data_offset": 2048, 00:07:37.812 "data_size": 63488 00:07:37.812 } 00:07:37.812 ] 00:07:37.812 }' 00:07:37.812 04:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:37.812 04:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.072 04:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:38.072 04:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:38.072 04:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:38.072 04:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:38.072 04:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:38.072 04:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:38.072 04:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:38.072 04:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:38.072 04:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.072 04:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.072 [2024-11-21 04:05:37.961836] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:38.072 04:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.072 04:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:38.072 "name": 
"Existed_Raid", 00:07:38.072 "aliases": [ 00:07:38.072 "6ec78c01-def6-4203-a859-1111c90cf105" 00:07:38.072 ], 00:07:38.072 "product_name": "Raid Volume", 00:07:38.072 "block_size": 512, 00:07:38.072 "num_blocks": 126976, 00:07:38.072 "uuid": "6ec78c01-def6-4203-a859-1111c90cf105", 00:07:38.072 "assigned_rate_limits": { 00:07:38.072 "rw_ios_per_sec": 0, 00:07:38.072 "rw_mbytes_per_sec": 0, 00:07:38.072 "r_mbytes_per_sec": 0, 00:07:38.072 "w_mbytes_per_sec": 0 00:07:38.072 }, 00:07:38.072 "claimed": false, 00:07:38.072 "zoned": false, 00:07:38.072 "supported_io_types": { 00:07:38.072 "read": true, 00:07:38.072 "write": true, 00:07:38.072 "unmap": true, 00:07:38.072 "flush": true, 00:07:38.072 "reset": true, 00:07:38.072 "nvme_admin": false, 00:07:38.072 "nvme_io": false, 00:07:38.072 "nvme_io_md": false, 00:07:38.072 "write_zeroes": true, 00:07:38.072 "zcopy": false, 00:07:38.072 "get_zone_info": false, 00:07:38.072 "zone_management": false, 00:07:38.072 "zone_append": false, 00:07:38.072 "compare": false, 00:07:38.072 "compare_and_write": false, 00:07:38.072 "abort": false, 00:07:38.072 "seek_hole": false, 00:07:38.072 "seek_data": false, 00:07:38.072 "copy": false, 00:07:38.072 "nvme_iov_md": false 00:07:38.072 }, 00:07:38.072 "memory_domains": [ 00:07:38.072 { 00:07:38.072 "dma_device_id": "system", 00:07:38.072 "dma_device_type": 1 00:07:38.072 }, 00:07:38.072 { 00:07:38.072 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:38.072 "dma_device_type": 2 00:07:38.072 }, 00:07:38.072 { 00:07:38.072 "dma_device_id": "system", 00:07:38.072 "dma_device_type": 1 00:07:38.072 }, 00:07:38.072 { 00:07:38.072 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:38.072 "dma_device_type": 2 00:07:38.072 } 00:07:38.072 ], 00:07:38.072 "driver_specific": { 00:07:38.072 "raid": { 00:07:38.072 "uuid": "6ec78c01-def6-4203-a859-1111c90cf105", 00:07:38.072 "strip_size_kb": 64, 00:07:38.072 "state": "online", 00:07:38.072 "raid_level": "raid0", 00:07:38.072 "superblock": true, 00:07:38.072 
"num_base_bdevs": 2, 00:07:38.072 "num_base_bdevs_discovered": 2, 00:07:38.072 "num_base_bdevs_operational": 2, 00:07:38.072 "base_bdevs_list": [ 00:07:38.072 { 00:07:38.072 "name": "BaseBdev1", 00:07:38.072 "uuid": "4a7e8c5b-626b-49ca-a11b-8c770f900bd4", 00:07:38.072 "is_configured": true, 00:07:38.072 "data_offset": 2048, 00:07:38.072 "data_size": 63488 00:07:38.072 }, 00:07:38.072 { 00:07:38.072 "name": "BaseBdev2", 00:07:38.072 "uuid": "5ac49ab9-f87c-40d0-8a07-b8621c5267f8", 00:07:38.072 "is_configured": true, 00:07:38.072 "data_offset": 2048, 00:07:38.072 "data_size": 63488 00:07:38.072 } 00:07:38.072 ] 00:07:38.072 } 00:07:38.072 } 00:07:38.072 }' 00:07:38.072 04:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:38.072 04:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:38.072 BaseBdev2' 00:07:38.072 04:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:38.332 04:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:38.332 04:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:38.332 04:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:38.332 04:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.332 04:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.332 04:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:38.332 04:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:07:38.332 04:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:38.332 04:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:38.332 04:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:38.332 04:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:38.332 04:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:38.332 04:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.332 04:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.332 04:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.332 04:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:38.332 04:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:38.332 04:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:38.332 04:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.332 04:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.332 [2024-11-21 04:05:38.153333] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:38.332 [2024-11-21 04:05:38.153412] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:38.332 [2024-11-21 04:05:38.153491] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:38.332 04:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:07:38.332 04:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:38.332 04:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:38.332 04:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:38.332 04:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:38.332 04:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:38.332 04:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:38.332 04:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:38.332 04:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:38.332 04:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:38.332 04:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:38.332 04:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:38.332 04:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:38.332 04:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:38.332 04:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:38.332 04:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:38.332 04:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:38.332 04:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.332 04:05:38 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.332 04:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:38.332 04:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.332 04:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:38.332 "name": "Existed_Raid", 00:07:38.332 "uuid": "6ec78c01-def6-4203-a859-1111c90cf105", 00:07:38.332 "strip_size_kb": 64, 00:07:38.332 "state": "offline", 00:07:38.332 "raid_level": "raid0", 00:07:38.332 "superblock": true, 00:07:38.332 "num_base_bdevs": 2, 00:07:38.332 "num_base_bdevs_discovered": 1, 00:07:38.332 "num_base_bdevs_operational": 1, 00:07:38.332 "base_bdevs_list": [ 00:07:38.332 { 00:07:38.332 "name": null, 00:07:38.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:38.332 "is_configured": false, 00:07:38.332 "data_offset": 0, 00:07:38.332 "data_size": 63488 00:07:38.332 }, 00:07:38.332 { 00:07:38.332 "name": "BaseBdev2", 00:07:38.332 "uuid": "5ac49ab9-f87c-40d0-8a07-b8621c5267f8", 00:07:38.332 "is_configured": true, 00:07:38.332 "data_offset": 2048, 00:07:38.332 "data_size": 63488 00:07:38.332 } 00:07:38.332 ] 00:07:38.332 }' 00:07:38.332 04:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:38.332 04:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.903 04:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:38.903 04:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:38.903 04:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:38.903 04:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:38.903 04:05:38 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.903 04:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.903 04:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.903 04:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:38.903 04:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:38.903 04:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:38.903 04:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.903 04:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.903 [2024-11-21 04:05:38.701329] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:38.903 [2024-11-21 04:05:38.701400] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:07:38.903 04:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.903 04:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:38.903 04:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:38.903 04:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:38.903 04:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:38.903 04:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.903 04:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.903 04:05:38 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.903 04:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:38.903 04:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:38.903 04:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:38.903 04:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72322 00:07:38.903 04:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 72322 ']' 00:07:38.903 04:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 72322 00:07:38.903 04:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:38.903 04:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:38.903 04:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72322 00:07:38.903 killing process with pid 72322 00:07:38.903 04:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:38.903 04:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:38.903 04:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72322' 00:07:38.903 04:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 72322 00:07:38.903 [2024-11-21 04:05:38.816000] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:38.903 04:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 72322 00:07:38.903 [2024-11-21 04:05:38.817619] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:39.473 04:05:39 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@328 -- # return 0 00:07:39.473 00:07:39.473 real 0m4.085s 00:07:39.473 user 0m6.334s 00:07:39.473 sys 0m0.837s 00:07:39.473 04:05:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:39.473 04:05:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.473 ************************************ 00:07:39.473 END TEST raid_state_function_test_sb 00:07:39.473 ************************************ 00:07:39.473 04:05:39 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:07:39.473 04:05:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:39.473 04:05:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:39.473 04:05:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:39.473 ************************************ 00:07:39.473 START TEST raid_superblock_test 00:07:39.473 ************************************ 00:07:39.473 04:05:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2 00:07:39.473 04:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:07:39.473 04:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:39.473 04:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:39.473 04:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:39.473 04:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:39.473 04:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:39.473 04:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:39.473 04:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:39.473 04:05:39 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:39.473 04:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:39.473 04:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:39.473 04:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:39.473 04:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:39.473 04:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:07:39.473 04:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:39.473 04:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:39.473 04:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72563 00:07:39.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:39.473 04:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72563 00:07:39.473 04:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:39.473 04:05:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 72563 ']' 00:07:39.473 04:05:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.473 04:05:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:39.473 04:05:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:39.473 04:05:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:39.473 04:05:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.473 [2024-11-21 04:05:39.294038] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:07:39.473 [2024-11-21 04:05:39.294180] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72563 ] 00:07:39.473 [2024-11-21 04:05:39.426111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.733 [2024-11-21 04:05:39.465892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.733 [2024-11-21 04:05:39.541789] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:39.733 [2024-11-21 04:05:39.541834] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:40.303 04:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:40.303 04:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:40.303 04:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:40.303 04:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:40.303 04:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:40.303 04:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:40.303 04:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:40.303 04:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:40.303 04:05:40 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:40.303 04:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:40.303 04:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:40.303 04:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.303 04:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.303 malloc1 00:07:40.303 04:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.303 04:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:40.303 04:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.303 04:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.303 [2024-11-21 04:05:40.208319] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:40.303 [2024-11-21 04:05:40.208431] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:40.303 [2024-11-21 04:05:40.208472] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:07:40.303 [2024-11-21 04:05:40.208509] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:40.303 [2024-11-21 04:05:40.211104] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:40.303 [2024-11-21 04:05:40.211198] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:40.303 pt1 00:07:40.303 04:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.303 04:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:40.303 04:05:40 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:40.303 04:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:40.303 04:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:40.303 04:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:40.303 04:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:40.303 04:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:40.303 04:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:40.303 04:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:40.303 04:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.303 04:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.303 malloc2 00:07:40.303 04:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.303 04:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:40.303 04:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.303 04:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.303 [2024-11-21 04:05:40.243379] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:40.303 [2024-11-21 04:05:40.243493] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:40.303 [2024-11-21 04:05:40.243515] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:40.303 
[2024-11-21 04:05:40.243527] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:40.303 [2024-11-21 04:05:40.245941] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:40.303 [2024-11-21 04:05:40.245977] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:40.303 pt2 00:07:40.303 04:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.303 04:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:40.303 04:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:40.303 04:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:40.303 04:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.303 04:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.303 [2024-11-21 04:05:40.255401] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:40.303 [2024-11-21 04:05:40.257570] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:40.303 [2024-11-21 04:05:40.257716] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:07:40.303 [2024-11-21 04:05:40.257730] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:40.303 [2024-11-21 04:05:40.258011] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:40.303 [2024-11-21 04:05:40.258152] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:07:40.303 [2024-11-21 04:05:40.258163] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:07:40.303 [2024-11-21 04:05:40.258309] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:40.303 04:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.303 04:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:40.303 04:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:40.303 04:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:40.303 04:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:40.303 04:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:40.303 04:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:40.303 04:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:40.303 04:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:40.303 04:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:40.303 04:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:40.303 04:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:40.303 04:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:40.303 04:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.304 04:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.563 04:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.563 04:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:40.563 "name": "raid_bdev1", 00:07:40.563 "uuid": 
"1547d60f-41b4-48c0-a6e9-fa8d4c1d44d2", 00:07:40.563 "strip_size_kb": 64, 00:07:40.563 "state": "online", 00:07:40.563 "raid_level": "raid0", 00:07:40.563 "superblock": true, 00:07:40.563 "num_base_bdevs": 2, 00:07:40.563 "num_base_bdevs_discovered": 2, 00:07:40.563 "num_base_bdevs_operational": 2, 00:07:40.564 "base_bdevs_list": [ 00:07:40.564 { 00:07:40.564 "name": "pt1", 00:07:40.564 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:40.564 "is_configured": true, 00:07:40.564 "data_offset": 2048, 00:07:40.564 "data_size": 63488 00:07:40.564 }, 00:07:40.564 { 00:07:40.564 "name": "pt2", 00:07:40.564 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:40.564 "is_configured": true, 00:07:40.564 "data_offset": 2048, 00:07:40.564 "data_size": 63488 00:07:40.564 } 00:07:40.564 ] 00:07:40.564 }' 00:07:40.564 04:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:40.564 04:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.823 04:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:40.823 04:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:40.823 04:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:40.823 04:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:40.823 04:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:40.823 04:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:40.823 04:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:40.823 04:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:40.823 04:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.823 04:05:40 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.823 [2024-11-21 04:05:40.738962] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:40.823 04:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.823 04:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:40.823 "name": "raid_bdev1", 00:07:40.823 "aliases": [ 00:07:40.823 "1547d60f-41b4-48c0-a6e9-fa8d4c1d44d2" 00:07:40.823 ], 00:07:40.823 "product_name": "Raid Volume", 00:07:40.823 "block_size": 512, 00:07:40.823 "num_blocks": 126976, 00:07:40.823 "uuid": "1547d60f-41b4-48c0-a6e9-fa8d4c1d44d2", 00:07:40.823 "assigned_rate_limits": { 00:07:40.823 "rw_ios_per_sec": 0, 00:07:40.823 "rw_mbytes_per_sec": 0, 00:07:40.823 "r_mbytes_per_sec": 0, 00:07:40.823 "w_mbytes_per_sec": 0 00:07:40.823 }, 00:07:40.823 "claimed": false, 00:07:40.823 "zoned": false, 00:07:40.823 "supported_io_types": { 00:07:40.823 "read": true, 00:07:40.823 "write": true, 00:07:40.823 "unmap": true, 00:07:40.823 "flush": true, 00:07:40.823 "reset": true, 00:07:40.823 "nvme_admin": false, 00:07:40.823 "nvme_io": false, 00:07:40.823 "nvme_io_md": false, 00:07:40.823 "write_zeroes": true, 00:07:40.823 "zcopy": false, 00:07:40.823 "get_zone_info": false, 00:07:40.823 "zone_management": false, 00:07:40.823 "zone_append": false, 00:07:40.823 "compare": false, 00:07:40.823 "compare_and_write": false, 00:07:40.823 "abort": false, 00:07:40.823 "seek_hole": false, 00:07:40.823 "seek_data": false, 00:07:40.823 "copy": false, 00:07:40.823 "nvme_iov_md": false 00:07:40.823 }, 00:07:40.823 "memory_domains": [ 00:07:40.823 { 00:07:40.823 "dma_device_id": "system", 00:07:40.823 "dma_device_type": 1 00:07:40.823 }, 00:07:40.823 { 00:07:40.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:40.823 "dma_device_type": 2 00:07:40.823 }, 00:07:40.823 { 00:07:40.823 "dma_device_id": "system", 00:07:40.823 "dma_device_type": 
1 00:07:40.823 }, 00:07:40.823 { 00:07:40.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:40.823 "dma_device_type": 2 00:07:40.823 } 00:07:40.823 ], 00:07:40.823 "driver_specific": { 00:07:40.823 "raid": { 00:07:40.824 "uuid": "1547d60f-41b4-48c0-a6e9-fa8d4c1d44d2", 00:07:40.824 "strip_size_kb": 64, 00:07:40.824 "state": "online", 00:07:40.824 "raid_level": "raid0", 00:07:40.824 "superblock": true, 00:07:40.824 "num_base_bdevs": 2, 00:07:40.824 "num_base_bdevs_discovered": 2, 00:07:40.824 "num_base_bdevs_operational": 2, 00:07:40.824 "base_bdevs_list": [ 00:07:40.824 { 00:07:40.824 "name": "pt1", 00:07:40.824 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:40.824 "is_configured": true, 00:07:40.824 "data_offset": 2048, 00:07:40.824 "data_size": 63488 00:07:40.824 }, 00:07:40.824 { 00:07:40.824 "name": "pt2", 00:07:40.824 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:40.824 "is_configured": true, 00:07:40.824 "data_offset": 2048, 00:07:40.824 "data_size": 63488 00:07:40.824 } 00:07:40.824 ] 00:07:40.824 } 00:07:40.824 } 00:07:40.824 }' 00:07:40.824 04:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:41.084 04:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:41.084 pt2' 00:07:41.084 04:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:41.084 04:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:41.084 04:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:41.084 04:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:41.084 04:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.084 04:05:40 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.084 04:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:41.084 04:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.084 04:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:41.084 04:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:41.084 04:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:41.084 04:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:41.084 04:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:41.084 04:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.084 04:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.084 04:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.084 04:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:41.084 04:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:41.084 04:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:41.084 04:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:41.084 04:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.084 04:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.084 [2024-11-21 04:05:40.990380] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:07:41.084 04:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.084 04:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=1547d60f-41b4-48c0-a6e9-fa8d4c1d44d2 00:07:41.084 04:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 1547d60f-41b4-48c0-a6e9-fa8d4c1d44d2 ']' 00:07:41.084 04:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:41.084 04:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.084 04:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.084 [2024-11-21 04:05:41.034025] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:41.084 [2024-11-21 04:05:41.034103] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:41.084 [2024-11-21 04:05:41.034239] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:41.084 [2024-11-21 04:05:41.034343] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:41.084 [2024-11-21 04:05:41.034395] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:07:41.084 04:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.084 04:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.084 04:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.084 04:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.084 04:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:41.084 04:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:07:41.345 04:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:41.345 04:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:41.345 04:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:41.345 04:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:41.345 04:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.345 04:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.345 04:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.345 04:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:41.345 04:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:41.345 04:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.345 04:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.345 04:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.345 04:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:41.345 04:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.345 04:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:41.345 04:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.345 04:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.345 04:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:41.345 04:05:41 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:41.345 04:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:41.345 04:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:41.345 04:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:41.345 04:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:41.345 04:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:41.345 04:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:41.345 04:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:41.345 04:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.345 04:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.345 [2024-11-21 04:05:41.177849] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:41.345 [2024-11-21 04:05:41.180212] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:41.345 [2024-11-21 04:05:41.180355] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:41.345 [2024-11-21 04:05:41.180502] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:41.345 [2024-11-21 04:05:41.180576] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:41.345 [2024-11-21 04:05:41.180619] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:07:41.345 request: 00:07:41.345 { 00:07:41.345 "name": "raid_bdev1", 00:07:41.345 "raid_level": "raid0", 00:07:41.345 "base_bdevs": [ 00:07:41.345 "malloc1", 00:07:41.345 "malloc2" 00:07:41.345 ], 00:07:41.345 "strip_size_kb": 64, 00:07:41.345 "superblock": false, 00:07:41.345 "method": "bdev_raid_create", 00:07:41.345 "req_id": 1 00:07:41.345 } 00:07:41.345 Got JSON-RPC error response 00:07:41.345 response: 00:07:41.345 { 00:07:41.345 "code": -17, 00:07:41.345 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:41.345 } 00:07:41.345 04:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:41.345 04:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:07:41.345 04:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:41.345 04:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:41.345 04:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:41.345 04:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.345 04:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:41.345 04:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.345 04:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.345 04:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.345 04:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:41.345 04:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:41.345 04:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p 
pt1 -u 00000000-0000-0000-0000-000000000001 00:07:41.345 04:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.345 04:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.345 [2024-11-21 04:05:41.233717] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:41.345 [2024-11-21 04:05:41.233824] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:41.345 [2024-11-21 04:05:41.233862] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:41.345 [2024-11-21 04:05:41.233890] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:41.345 [2024-11-21 04:05:41.236502] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:41.345 [2024-11-21 04:05:41.236579] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:41.345 [2024-11-21 04:05:41.236683] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:41.345 [2024-11-21 04:05:41.236764] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:41.345 pt1 00:07:41.345 04:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.345 04:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:07:41.345 04:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:41.345 04:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:41.345 04:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:41.345 04:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:41.345 04:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:07:41.345 04:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:41.345 04:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:41.345 04:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:41.345 04:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:41.345 04:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.345 04:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:41.345 04:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.345 04:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.345 04:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.345 04:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:41.345 "name": "raid_bdev1", 00:07:41.345 "uuid": "1547d60f-41b4-48c0-a6e9-fa8d4c1d44d2", 00:07:41.345 "strip_size_kb": 64, 00:07:41.345 "state": "configuring", 00:07:41.345 "raid_level": "raid0", 00:07:41.345 "superblock": true, 00:07:41.345 "num_base_bdevs": 2, 00:07:41.345 "num_base_bdevs_discovered": 1, 00:07:41.345 "num_base_bdevs_operational": 2, 00:07:41.345 "base_bdevs_list": [ 00:07:41.345 { 00:07:41.345 "name": "pt1", 00:07:41.345 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:41.345 "is_configured": true, 00:07:41.345 "data_offset": 2048, 00:07:41.345 "data_size": 63488 00:07:41.345 }, 00:07:41.345 { 00:07:41.345 "name": null, 00:07:41.345 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:41.345 "is_configured": false, 00:07:41.345 "data_offset": 2048, 00:07:41.345 "data_size": 63488 00:07:41.345 } 00:07:41.345 ] 00:07:41.345 }' 00:07:41.345 04:05:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:41.345 04:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.915 04:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:41.915 04:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:41.915 04:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:41.915 04:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:41.915 04:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.915 04:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.915 [2024-11-21 04:05:41.676983] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:41.915 [2024-11-21 04:05:41.677133] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:41.915 [2024-11-21 04:05:41.677165] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:41.915 [2024-11-21 04:05:41.677176] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:41.915 [2024-11-21 04:05:41.677724] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:41.915 [2024-11-21 04:05:41.677746] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:41.915 [2024-11-21 04:05:41.677846] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:41.915 [2024-11-21 04:05:41.677872] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:41.915 [2024-11-21 04:05:41.677980] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:41.915 [2024-11-21 04:05:41.677989] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:41.915 [2024-11-21 04:05:41.678310] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:07:41.915 [2024-11-21 04:05:41.678453] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:41.915 [2024-11-21 04:05:41.678470] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:07:41.915 [2024-11-21 04:05:41.678589] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:41.915 pt2 00:07:41.915 04:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.915 04:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:41.915 04:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:41.915 04:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:41.915 04:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:41.915 04:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:41.915 04:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:41.915 04:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:41.915 04:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:41.915 04:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:41.915 04:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:41.915 04:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:41.915 04:05:41 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:07:41.915 04:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.915 04:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:41.915 04:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.915 04:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.915 04:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.915 04:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:41.915 "name": "raid_bdev1", 00:07:41.915 "uuid": "1547d60f-41b4-48c0-a6e9-fa8d4c1d44d2", 00:07:41.915 "strip_size_kb": 64, 00:07:41.915 "state": "online", 00:07:41.915 "raid_level": "raid0", 00:07:41.915 "superblock": true, 00:07:41.915 "num_base_bdevs": 2, 00:07:41.915 "num_base_bdevs_discovered": 2, 00:07:41.915 "num_base_bdevs_operational": 2, 00:07:41.915 "base_bdevs_list": [ 00:07:41.915 { 00:07:41.915 "name": "pt1", 00:07:41.915 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:41.915 "is_configured": true, 00:07:41.915 "data_offset": 2048, 00:07:41.915 "data_size": 63488 00:07:41.915 }, 00:07:41.915 { 00:07:41.915 "name": "pt2", 00:07:41.915 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:41.915 "is_configured": true, 00:07:41.915 "data_offset": 2048, 00:07:41.915 "data_size": 63488 00:07:41.915 } 00:07:41.915 ] 00:07:41.915 }' 00:07:41.915 04:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:41.915 04:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.174 04:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:42.174 04:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:42.174 
04:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:42.175 04:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:42.175 04:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:42.175 04:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:42.175 04:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:42.175 04:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:42.175 04:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.175 04:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.175 [2024-11-21 04:05:42.124597] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:42.175 04:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.434 04:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:42.434 "name": "raid_bdev1", 00:07:42.434 "aliases": [ 00:07:42.434 "1547d60f-41b4-48c0-a6e9-fa8d4c1d44d2" 00:07:42.434 ], 00:07:42.434 "product_name": "Raid Volume", 00:07:42.434 "block_size": 512, 00:07:42.434 "num_blocks": 126976, 00:07:42.434 "uuid": "1547d60f-41b4-48c0-a6e9-fa8d4c1d44d2", 00:07:42.434 "assigned_rate_limits": { 00:07:42.434 "rw_ios_per_sec": 0, 00:07:42.434 "rw_mbytes_per_sec": 0, 00:07:42.434 "r_mbytes_per_sec": 0, 00:07:42.434 "w_mbytes_per_sec": 0 00:07:42.434 }, 00:07:42.434 "claimed": false, 00:07:42.434 "zoned": false, 00:07:42.434 "supported_io_types": { 00:07:42.434 "read": true, 00:07:42.434 "write": true, 00:07:42.434 "unmap": true, 00:07:42.434 "flush": true, 00:07:42.434 "reset": true, 00:07:42.434 "nvme_admin": false, 00:07:42.434 "nvme_io": false, 00:07:42.434 "nvme_io_md": false, 00:07:42.434 
"write_zeroes": true, 00:07:42.434 "zcopy": false, 00:07:42.434 "get_zone_info": false, 00:07:42.434 "zone_management": false, 00:07:42.434 "zone_append": false, 00:07:42.434 "compare": false, 00:07:42.434 "compare_and_write": false, 00:07:42.434 "abort": false, 00:07:42.434 "seek_hole": false, 00:07:42.434 "seek_data": false, 00:07:42.434 "copy": false, 00:07:42.434 "nvme_iov_md": false 00:07:42.434 }, 00:07:42.434 "memory_domains": [ 00:07:42.435 { 00:07:42.435 "dma_device_id": "system", 00:07:42.435 "dma_device_type": 1 00:07:42.435 }, 00:07:42.435 { 00:07:42.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:42.435 "dma_device_type": 2 00:07:42.435 }, 00:07:42.435 { 00:07:42.435 "dma_device_id": "system", 00:07:42.435 "dma_device_type": 1 00:07:42.435 }, 00:07:42.435 { 00:07:42.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:42.435 "dma_device_type": 2 00:07:42.435 } 00:07:42.435 ], 00:07:42.435 "driver_specific": { 00:07:42.435 "raid": { 00:07:42.435 "uuid": "1547d60f-41b4-48c0-a6e9-fa8d4c1d44d2", 00:07:42.435 "strip_size_kb": 64, 00:07:42.435 "state": "online", 00:07:42.435 "raid_level": "raid0", 00:07:42.435 "superblock": true, 00:07:42.435 "num_base_bdevs": 2, 00:07:42.435 "num_base_bdevs_discovered": 2, 00:07:42.435 "num_base_bdevs_operational": 2, 00:07:42.435 "base_bdevs_list": [ 00:07:42.435 { 00:07:42.435 "name": "pt1", 00:07:42.435 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:42.435 "is_configured": true, 00:07:42.435 "data_offset": 2048, 00:07:42.435 "data_size": 63488 00:07:42.435 }, 00:07:42.435 { 00:07:42.435 "name": "pt2", 00:07:42.435 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:42.435 "is_configured": true, 00:07:42.435 "data_offset": 2048, 00:07:42.435 "data_size": 63488 00:07:42.435 } 00:07:42.435 ] 00:07:42.435 } 00:07:42.435 } 00:07:42.435 }' 00:07:42.435 04:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:07:42.435 04:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:42.435 pt2' 00:07:42.435 04:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:42.435 04:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:42.435 04:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:42.435 04:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:42.435 04:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.435 04:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.435 04:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:42.435 04:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.435 04:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:42.435 04:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:42.435 04:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:42.435 04:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:42.435 04:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:42.435 04:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.435 04:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.435 04:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.435 04:05:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:42.435 04:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:42.435 04:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:42.435 04:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.435 04:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.435 04:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:42.435 [2024-11-21 04:05:42.372333] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:42.435 04:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.695 04:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 1547d60f-41b4-48c0-a6e9-fa8d4c1d44d2 '!=' 1547d60f-41b4-48c0-a6e9-fa8d4c1d44d2 ']' 00:07:42.695 04:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:07:42.695 04:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:42.695 04:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:42.695 04:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72563 00:07:42.695 04:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 72563 ']' 00:07:42.695 04:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 72563 00:07:42.695 04:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:42.695 04:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:42.695 04:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72563 00:07:42.695 killing process with pid 72563 
00:07:42.695 04:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:42.695 04:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:42.695 04:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72563' 00:07:42.695 04:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 72563 00:07:42.695 [2024-11-21 04:05:42.457986] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:42.695 [2024-11-21 04:05:42.458086] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:42.695 04:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 72563 00:07:42.695 [2024-11-21 04:05:42.458144] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:42.695 [2024-11-21 04:05:42.458153] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:07:42.695 [2024-11-21 04:05:42.500734] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:42.956 04:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:42.956 00:07:42.956 real 0m3.615s 00:07:42.956 user 0m5.467s 00:07:42.956 sys 0m0.822s 00:07:42.956 04:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:42.956 04:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.956 ************************************ 00:07:42.956 END TEST raid_superblock_test 00:07:42.956 ************************************ 00:07:42.956 04:05:42 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:07:42.956 04:05:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:42.956 04:05:42 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:07:42.956 04:05:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:42.956 ************************************ 00:07:42.956 START TEST raid_read_error_test 00:07:42.956 ************************************ 00:07:42.956 04:05:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read 00:07:42.956 04:05:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:42.956 04:05:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:42.956 04:05:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:42.956 04:05:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:42.956 04:05:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:42.956 04:05:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:42.956 04:05:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:42.956 04:05:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:42.956 04:05:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:42.956 04:05:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:42.956 04:05:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:42.956 04:05:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:42.956 04:05:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:42.956 04:05:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:42.956 04:05:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:42.956 04:05:42 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:42.956 04:05:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:42.956 04:05:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:42.956 04:05:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:42.956 04:05:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:42.956 04:05:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:42.956 04:05:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:42.956 04:05:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.buexIXjLMW 00:07:42.956 04:05:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72758 00:07:42.956 04:05:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:42.956 04:05:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72758 00:07:42.956 04:05:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 72758 ']' 00:07:42.956 04:05:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.956 04:05:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:42.956 04:05:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:42.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:42.956 04:05:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:42.956 04:05:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.216 [2024-11-21 04:05:43.001871] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:07:43.216 [2024-11-21 04:05:43.002103] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72758 ] 00:07:43.216 [2024-11-21 04:05:43.136560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.216 [2024-11-21 04:05:43.176483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.487 [2024-11-21 04:05:43.254822] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:43.487 [2024-11-21 04:05:43.254855] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:44.073 04:05:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:44.073 04:05:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:44.073 04:05:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:44.073 04:05:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:44.073 04:05:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.073 04:05:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.073 BaseBdev1_malloc 00:07:44.073 04:05:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.073 04:05:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:07:44.073 04:05:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.073 04:05:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.073 true 00:07:44.073 04:05:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.073 04:05:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:44.073 04:05:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.073 04:05:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.073 [2024-11-21 04:05:43.873523] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:44.073 [2024-11-21 04:05:43.873584] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:44.073 [2024-11-21 04:05:43.873622] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:07:44.073 [2024-11-21 04:05:43.873638] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:44.073 [2024-11-21 04:05:43.876135] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:44.073 [2024-11-21 04:05:43.876282] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:44.073 BaseBdev1 00:07:44.073 04:05:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.073 04:05:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:44.073 04:05:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:44.073 04:05:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.073 04:05:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:07:44.073 BaseBdev2_malloc 00:07:44.073 04:05:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.073 04:05:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:44.073 04:05:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.073 04:05:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.073 true 00:07:44.073 04:05:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.073 04:05:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:44.073 04:05:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.073 04:05:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.073 [2024-11-21 04:05:43.920095] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:44.073 [2024-11-21 04:05:43.920195] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:44.073 [2024-11-21 04:05:43.920253] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:07:44.073 [2024-11-21 04:05:43.920275] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:44.073 [2024-11-21 04:05:43.922910] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:44.074 [2024-11-21 04:05:43.922953] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:44.074 BaseBdev2 00:07:44.074 04:05:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.074 04:05:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:44.074 04:05:43 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.074 04:05:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.074 [2024-11-21 04:05:43.932141] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:44.074 [2024-11-21 04:05:43.934467] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:44.074 [2024-11-21 04:05:43.934774] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:44.074 [2024-11-21 04:05:43.934795] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:44.074 [2024-11-21 04:05:43.935099] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:07:44.074 [2024-11-21 04:05:43.935293] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:44.074 [2024-11-21 04:05:43.935309] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:07:44.074 [2024-11-21 04:05:43.935453] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:44.074 04:05:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.074 04:05:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:44.074 04:05:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:44.074 04:05:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:44.074 04:05:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:44.074 04:05:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:44.074 04:05:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:07:44.074 04:05:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:44.074 04:05:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:44.074 04:05:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:44.074 04:05:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:44.074 04:05:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.074 04:05:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:44.074 04:05:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.074 04:05:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.074 04:05:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.074 04:05:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:44.074 "name": "raid_bdev1", 00:07:44.074 "uuid": "201c3843-2f96-49cc-9cad-d7f1ed74a7f9", 00:07:44.074 "strip_size_kb": 64, 00:07:44.074 "state": "online", 00:07:44.074 "raid_level": "raid0", 00:07:44.074 "superblock": true, 00:07:44.074 "num_base_bdevs": 2, 00:07:44.074 "num_base_bdevs_discovered": 2, 00:07:44.074 "num_base_bdevs_operational": 2, 00:07:44.074 "base_bdevs_list": [ 00:07:44.074 { 00:07:44.074 "name": "BaseBdev1", 00:07:44.074 "uuid": "36fdc18e-bf24-536f-8e4d-e50a5d4a13a0", 00:07:44.074 "is_configured": true, 00:07:44.074 "data_offset": 2048, 00:07:44.074 "data_size": 63488 00:07:44.074 }, 00:07:44.074 { 00:07:44.074 "name": "BaseBdev2", 00:07:44.074 "uuid": "cf8cde5b-e75c-576c-9a78-b5c03e9ca9ca", 00:07:44.074 "is_configured": true, 00:07:44.074 "data_offset": 2048, 00:07:44.074 "data_size": 63488 00:07:44.074 } 00:07:44.074 ] 00:07:44.074 }' 00:07:44.074 04:05:43 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:44.074 04:05:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.645 04:05:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:44.645 04:05:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:44.645 [2024-11-21 04:05:44.455846] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:07:45.584 04:05:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:45.584 04:05:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.585 04:05:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.585 04:05:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.585 04:05:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:45.585 04:05:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:45.585 04:05:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:45.585 04:05:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:45.585 04:05:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:45.585 04:05:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:45.585 04:05:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:45.585 04:05:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:45.585 04:05:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:07:45.585 04:05:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:45.585 04:05:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:45.585 04:05:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:45.585 04:05:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:45.585 04:05:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:45.585 04:05:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:45.585 04:05:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.585 04:05:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.585 04:05:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.585 04:05:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:45.585 "name": "raid_bdev1", 00:07:45.585 "uuid": "201c3843-2f96-49cc-9cad-d7f1ed74a7f9", 00:07:45.585 "strip_size_kb": 64, 00:07:45.585 "state": "online", 00:07:45.585 "raid_level": "raid0", 00:07:45.585 "superblock": true, 00:07:45.585 "num_base_bdevs": 2, 00:07:45.585 "num_base_bdevs_discovered": 2, 00:07:45.585 "num_base_bdevs_operational": 2, 00:07:45.585 "base_bdevs_list": [ 00:07:45.585 { 00:07:45.585 "name": "BaseBdev1", 00:07:45.585 "uuid": "36fdc18e-bf24-536f-8e4d-e50a5d4a13a0", 00:07:45.585 "is_configured": true, 00:07:45.585 "data_offset": 2048, 00:07:45.585 "data_size": 63488 00:07:45.585 }, 00:07:45.585 { 00:07:45.585 "name": "BaseBdev2", 00:07:45.585 "uuid": "cf8cde5b-e75c-576c-9a78-b5c03e9ca9ca", 00:07:45.585 "is_configured": true, 00:07:45.585 "data_offset": 2048, 00:07:45.585 "data_size": 63488 00:07:45.585 } 00:07:45.585 ] 00:07:45.585 }' 00:07:45.585 04:05:45 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:45.585 04:05:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.155 04:05:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:46.155 04:05:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.155 04:05:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.155 [2024-11-21 04:05:45.824154] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:46.156 [2024-11-21 04:05:45.824192] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:46.156 [2024-11-21 04:05:45.826795] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:46.156 [2024-11-21 04:05:45.826889] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:46.156 [2024-11-21 04:05:45.826948] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:46.156 [2024-11-21 04:05:45.827000] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:07:46.156 { 00:07:46.156 "results": [ 00:07:46.156 { 00:07:46.156 "job": "raid_bdev1", 00:07:46.156 "core_mask": "0x1", 00:07:46.156 "workload": "randrw", 00:07:46.156 "percentage": 50, 00:07:46.156 "status": "finished", 00:07:46.156 "queue_depth": 1, 00:07:46.156 "io_size": 131072, 00:07:46.156 "runtime": 1.368609, 00:07:46.156 "iops": 15114.61637326658, 00:07:46.156 "mibps": 1889.3270466583224, 00:07:46.156 "io_failed": 1, 00:07:46.156 "io_timeout": 0, 00:07:46.156 "avg_latency_us": 92.6908852109092, 00:07:46.156 "min_latency_us": 24.817467248908297, 00:07:46.156 "max_latency_us": 1416.6078602620087 00:07:46.156 } 00:07:46.156 ], 00:07:46.156 "core_count": 1 00:07:46.156 } 00:07:46.156 04:05:45 bdev_raid.raid_read_error_test -- 
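The `fail_per_s=0.73` extracted later by the `@845` grep/awk pipeline is just the failed-I/O rate bdevperf reports for the run summarized in the `results` block above. A sketch of that arithmetic, using the figures from the log (the assumption that bdevperf's fail-per-second column equals `io_failed / runtime` is inferred from these numbers, not from the tool's documentation):

```python
# Figures taken from the "results" block in the log above.
io_failed = 1          # the single injected read failure on EE_BaseBdev1_malloc
runtime_s = 1.368609   # "runtime" field of the results JSON

# raid0 has no redundancy, so the injected error surfaces as a failed I/O;
# the test at @849 asserts this rate is non-zero ([[ 0.73 != 0.00 ]]).
fail_per_s = io_failed / runtime_s
print(f"{fail_per_s:.2f}")
```

Mirroring this, the `has_redundancy raid0` check at `@846` returns 1, selecting the branch that expects a non-zero failure rate rather than the `0.00` expected for redundant levels.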
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.156 04:05:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72758 00:07:46.156 04:05:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 72758 ']' 00:07:46.156 04:05:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 72758 00:07:46.156 04:05:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:07:46.156 04:05:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:46.156 04:05:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72758 00:07:46.156 04:05:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:46.156 04:05:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:46.156 killing process with pid 72758 00:07:46.156 04:05:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72758' 00:07:46.156 04:05:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 72758 00:07:46.156 [2024-11-21 04:05:45.872740] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:46.156 04:05:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 72758 00:07:46.156 [2024-11-21 04:05:45.903172] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:46.416 04:05:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.buexIXjLMW 00:07:46.416 04:05:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:46.416 04:05:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:46.416 04:05:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:07:46.416 04:05:46 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:46.416 04:05:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:46.416 04:05:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:46.416 ************************************ 00:07:46.416 END TEST raid_read_error_test 00:07:46.416 ************************************ 00:07:46.416 04:05:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:07:46.416 00:07:46.416 real 0m3.346s 00:07:46.416 user 0m4.134s 00:07:46.417 sys 0m0.584s 00:07:46.417 04:05:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:46.417 04:05:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.417 04:05:46 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:07:46.417 04:05:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:46.417 04:05:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:46.417 04:05:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:46.417 ************************************ 00:07:46.417 START TEST raid_write_error_test 00:07:46.417 ************************************ 00:07:46.417 04:05:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write 00:07:46.417 04:05:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:46.417 04:05:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:46.417 04:05:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:46.417 04:05:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:46.417 04:05:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:46.417 04:05:46 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:46.417 04:05:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:46.417 04:05:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:46.417 04:05:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:46.417 04:05:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:46.417 04:05:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:46.417 04:05:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:46.417 04:05:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:46.417 04:05:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:46.417 04:05:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:46.417 04:05:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:46.417 04:05:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:46.417 04:05:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:46.417 04:05:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:46.417 04:05:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:46.417 04:05:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:46.417 04:05:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:46.417 04:05:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.KP9sRZYJXW 00:07:46.417 04:05:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72893 00:07:46.417 04:05:46 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:46.417 04:05:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72893 00:07:46.417 04:05:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 72893 ']' 00:07:46.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:46.417 04:05:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:46.417 04:05:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:46.417 04:05:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:46.417 04:05:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:46.417 04:05:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.677 [2024-11-21 04:05:46.414321] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:07:46.677 [2024-11-21 04:05:46.414431] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72893 ] 00:07:46.677 [2024-11-21 04:05:46.567048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.677 [2024-11-21 04:05:46.608375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.937 [2024-11-21 04:05:46.684977] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:46.937 [2024-11-21 04:05:46.685012] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:47.506 04:05:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:47.506 04:05:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:47.506 04:05:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:47.506 04:05:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:47.506 04:05:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.506 04:05:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.506 BaseBdev1_malloc 00:07:47.506 04:05:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.506 04:05:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:47.506 04:05:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.506 04:05:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.506 true 00:07:47.506 04:05:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:07:47.506 04:05:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:47.506 04:05:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.506 04:05:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.506 [2024-11-21 04:05:47.292436] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:47.506 [2024-11-21 04:05:47.292494] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:47.506 [2024-11-21 04:05:47.292514] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:07:47.506 [2024-11-21 04:05:47.292523] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:47.507 [2024-11-21 04:05:47.294961] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:47.507 [2024-11-21 04:05:47.295001] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:47.507 BaseBdev1 00:07:47.507 04:05:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.507 04:05:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:47.507 04:05:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:47.507 04:05:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.507 04:05:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.507 BaseBdev2_malloc 00:07:47.507 04:05:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.507 04:05:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:47.507 04:05:47 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.507 04:05:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.507 true 00:07:47.507 04:05:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.507 04:05:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:47.507 04:05:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.507 04:05:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.507 [2024-11-21 04:05:47.339359] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:47.507 [2024-11-21 04:05:47.339413] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:47.507 [2024-11-21 04:05:47.339433] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:07:47.507 [2024-11-21 04:05:47.339451] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:47.507 [2024-11-21 04:05:47.341924] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:47.507 [2024-11-21 04:05:47.342019] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:47.507 BaseBdev2 00:07:47.507 04:05:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.507 04:05:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:47.507 04:05:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.507 04:05:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.507 [2024-11-21 04:05:47.351408] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:47.507 [2024-11-21 04:05:47.353660] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:47.507 [2024-11-21 04:05:47.353858] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:47.507 [2024-11-21 04:05:47.353871] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:47.507 [2024-11-21 04:05:47.354131] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:07:47.507 [2024-11-21 04:05:47.354296] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:47.507 [2024-11-21 04:05:47.354311] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:07:47.507 [2024-11-21 04:05:47.354444] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:47.507 04:05:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.507 04:05:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:47.507 04:05:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:47.507 04:05:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:47.507 04:05:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:47.507 04:05:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:47.507 04:05:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:47.507 04:05:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:47.507 04:05:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:47.507 04:05:47 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:47.507 04:05:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:47.507 04:05:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:47.507 04:05:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:47.507 04:05:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.507 04:05:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.507 04:05:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.507 04:05:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:47.507 "name": "raid_bdev1", 00:07:47.507 "uuid": "6b356346-a839-41be-b05d-dd16c73a0483", 00:07:47.507 "strip_size_kb": 64, 00:07:47.507 "state": "online", 00:07:47.507 "raid_level": "raid0", 00:07:47.507 "superblock": true, 00:07:47.507 "num_base_bdevs": 2, 00:07:47.507 "num_base_bdevs_discovered": 2, 00:07:47.507 "num_base_bdevs_operational": 2, 00:07:47.507 "base_bdevs_list": [ 00:07:47.507 { 00:07:47.507 "name": "BaseBdev1", 00:07:47.507 "uuid": "89c01955-d4f8-57de-b7ce-67bb176901ef", 00:07:47.507 "is_configured": true, 00:07:47.507 "data_offset": 2048, 00:07:47.507 "data_size": 63488 00:07:47.507 }, 00:07:47.507 { 00:07:47.507 "name": "BaseBdev2", 00:07:47.507 "uuid": "ab74d9d5-317b-57d3-949f-ef2dd06dd4de", 00:07:47.507 "is_configured": true, 00:07:47.507 "data_offset": 2048, 00:07:47.507 "data_size": 63488 00:07:47.507 } 00:07:47.507 ] 00:07:47.507 }' 00:07:47.507 04:05:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:47.507 04:05:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.076 04:05:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:48.076 04:05:47 
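The sizes in the JSON above follow directly from how the test builds its devices: each base is a `bdev_malloc_create 32 512` device, the `-s` superblock reserves the first blocks (the `data_offset` of 2048), and raid0 stripes across both bases, giving the `blockcnt 126976` logged at `@1735`. A sketch of that arithmetic (assuming, as the logged numbers suggest, that the malloc size argument is in MiB):

```python
MIB = 1024 * 1024
block_size = 512

# bdev_malloc_create 32 512 -> 32 MiB backing device, 512-byte blocks
base_blocks = 32 * MIB // block_size        # 65536 blocks per base bdev

# superblock reserves the first 2048 blocks ("data_offset" in the log)
data_offset = 2048
data_size = base_blocks - data_offset       # matches "data_size": 63488

# raid0 stripes data across all base bdevs, so capacity is the sum
num_base_bdevs = 2
raid_blockcnt = num_base_bdevs * data_size  # matches "blockcnt 126976"
print(base_blocks, data_size, raid_blockcnt)
```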
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:48.076 [2024-11-21 04:05:47.866996] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:07:49.016 04:05:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:49.016 04:05:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.016 04:05:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.016 04:05:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.016 04:05:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:49.016 04:05:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:49.016 04:05:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:49.016 04:05:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:49.016 04:05:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:49.016 04:05:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:49.016 04:05:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:49.016 04:05:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:49.016 04:05:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:49.016 04:05:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:49.016 04:05:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:49.016 04:05:48 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:49.016 04:05:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:49.016 04:05:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:49.016 04:05:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:49.016 04:05:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.016 04:05:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.016 04:05:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.016 04:05:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:49.016 "name": "raid_bdev1", 00:07:49.016 "uuid": "6b356346-a839-41be-b05d-dd16c73a0483", 00:07:49.016 "strip_size_kb": 64, 00:07:49.016 "state": "online", 00:07:49.016 "raid_level": "raid0", 00:07:49.016 "superblock": true, 00:07:49.016 "num_base_bdevs": 2, 00:07:49.016 "num_base_bdevs_discovered": 2, 00:07:49.016 "num_base_bdevs_operational": 2, 00:07:49.016 "base_bdevs_list": [ 00:07:49.016 { 00:07:49.016 "name": "BaseBdev1", 00:07:49.016 "uuid": "89c01955-d4f8-57de-b7ce-67bb176901ef", 00:07:49.016 "is_configured": true, 00:07:49.016 "data_offset": 2048, 00:07:49.016 "data_size": 63488 00:07:49.016 }, 00:07:49.016 { 00:07:49.016 "name": "BaseBdev2", 00:07:49.016 "uuid": "ab74d9d5-317b-57d3-949f-ef2dd06dd4de", 00:07:49.016 "is_configured": true, 00:07:49.016 "data_offset": 2048, 00:07:49.016 "data_size": 63488 00:07:49.016 } 00:07:49.016 ] 00:07:49.016 }' 00:07:49.016 04:05:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:49.016 04:05:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.276 04:05:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:07:49.276 04:05:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.276 04:05:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.276 [2024-11-21 04:05:49.227510] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:49.276 [2024-11-21 04:05:49.227546] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:49.276 [2024-11-21 04:05:49.230092] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:49.276 [2024-11-21 04:05:49.230146] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:49.276 [2024-11-21 04:05:49.230186] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:49.276 [2024-11-21 04:05:49.230196] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:07:49.276 { 00:07:49.276 "results": [ 00:07:49.276 { 00:07:49.276 "job": "raid_bdev1", 00:07:49.276 "core_mask": "0x1", 00:07:49.276 "workload": "randrw", 00:07:49.276 "percentage": 50, 00:07:49.276 "status": "finished", 00:07:49.276 "queue_depth": 1, 00:07:49.276 "io_size": 131072, 00:07:49.276 "runtime": 1.360996, 00:07:49.276 "iops": 15011.06542561477, 00:07:49.276 "mibps": 1876.3831782018462, 00:07:49.276 "io_failed": 1, 00:07:49.276 "io_timeout": 0, 00:07:49.276 "avg_latency_us": 93.15292204093488, 00:07:49.276 "min_latency_us": 24.929257641921396, 00:07:49.276 "max_latency_us": 1445.2262008733624 00:07:49.276 } 00:07:49.276 ], 00:07:49.276 "core_count": 1 00:07:49.276 } 00:07:49.276 04:05:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.276 04:05:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72893 00:07:49.276 04:05:49 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@954 -- # '[' -z 72893 ']' 00:07:49.276 04:05:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 72893 00:07:49.276 04:05:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:07:49.276 04:05:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:49.276 04:05:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72893 00:07:49.536 04:05:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:49.536 04:05:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:49.536 04:05:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72893' 00:07:49.536 killing process with pid 72893 00:07:49.536 04:05:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 72893 00:07:49.536 [2024-11-21 04:05:49.260066] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:49.536 04:05:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 72893 00:07:49.536 [2024-11-21 04:05:49.288258] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:49.796 04:05:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.KP9sRZYJXW 00:07:49.796 04:05:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:49.796 04:05:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:49.796 04:05:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:07:49.796 04:05:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:49.796 04:05:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:49.796 04:05:49 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:07:49.796 ************************************ 00:07:49.796 END TEST raid_write_error_test 00:07:49.796 ************************************ 00:07:49.796 04:05:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:07:49.796 00:07:49.796 real 0m3.311s 00:07:49.796 user 0m4.084s 00:07:49.796 sys 0m0.580s 00:07:49.796 04:05:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:49.796 04:05:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.796 04:05:49 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:49.796 04:05:49 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:07:49.796 04:05:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:49.796 04:05:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:49.796 04:05:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:49.796 ************************************ 00:07:49.796 START TEST raid_state_function_test 00:07:49.796 ************************************ 00:07:49.796 04:05:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false 00:07:49.796 04:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:49.796 04:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:49.796 04:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:49.796 04:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:49.796 04:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:49.796 04:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:07:49.796 04:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:49.796 04:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:49.796 04:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:49.796 04:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:49.796 04:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:49.796 04:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:49.796 04:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:49.796 04:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:49.796 04:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:49.796 04:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:49.796 04:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:49.796 04:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:49.796 04:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:49.796 04:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:49.796 04:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:49.796 04:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:49.796 04:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:49.796 04:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73025 00:07:49.796 04:05:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:49.796 04:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73025' 00:07:49.796 Process raid pid: 73025 00:07:49.796 04:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73025 00:07:49.796 04:05:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 73025 ']' 00:07:49.796 04:05:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.796 04:05:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:49.796 04:05:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:49.796 04:05:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:49.796 04:05:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.056 [2024-11-21 04:05:49.785815] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:07:50.056 [2024-11-21 04:05:49.786009] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:50.056 [2024-11-21 04:05:49.941307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.056 [2024-11-21 04:05:49.980580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.316 [2024-11-21 04:05:50.056118] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:50.316 [2024-11-21 04:05:50.056279] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:50.886 04:05:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:50.886 04:05:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:50.886 04:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:50.886 04:05:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.886 04:05:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.886 [2024-11-21 04:05:50.627126] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:50.886 [2024-11-21 04:05:50.627246] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:50.886 [2024-11-21 04:05:50.627282] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:50.886 [2024-11-21 04:05:50.627308] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:50.886 04:05:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.886 04:05:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:50.886 04:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:50.886 04:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:50.886 04:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:50.886 04:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:50.886 04:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:50.886 04:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:50.886 04:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:50.886 04:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:50.886 04:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:50.886 04:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.886 04:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:50.886 04:05:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.886 04:05:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.886 04:05:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.886 04:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:50.886 "name": "Existed_Raid", 00:07:50.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:50.886 "strip_size_kb": 64, 00:07:50.886 "state": "configuring", 00:07:50.886 
"raid_level": "concat", 00:07:50.886 "superblock": false, 00:07:50.886 "num_base_bdevs": 2, 00:07:50.886 "num_base_bdevs_discovered": 0, 00:07:50.886 "num_base_bdevs_operational": 2, 00:07:50.886 "base_bdevs_list": [ 00:07:50.886 { 00:07:50.886 "name": "BaseBdev1", 00:07:50.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:50.886 "is_configured": false, 00:07:50.886 "data_offset": 0, 00:07:50.886 "data_size": 0 00:07:50.886 }, 00:07:50.886 { 00:07:50.886 "name": "BaseBdev2", 00:07:50.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:50.886 "is_configured": false, 00:07:50.886 "data_offset": 0, 00:07:50.886 "data_size": 0 00:07:50.886 } 00:07:50.886 ] 00:07:50.886 }' 00:07:50.886 04:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:50.886 04:05:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.180 04:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:51.180 04:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.180 04:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.180 [2024-11-21 04:05:51.106250] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:51.180 [2024-11-21 04:05:51.106304] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:07:51.180 04:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.180 04:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:51.180 04:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.180 04:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:07:51.180 [2024-11-21 04:05:51.114204] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:51.180 [2024-11-21 04:05:51.114258] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:51.180 [2024-11-21 04:05:51.114267] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:51.180 [2024-11-21 04:05:51.114292] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:51.180 04:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.180 04:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:51.180 04:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.180 04:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.180 [2024-11-21 04:05:51.137550] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:51.180 BaseBdev1 00:07:51.180 04:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.180 04:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:51.180 04:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:51.180 04:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:51.180 04:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:51.180 04:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:51.180 04:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:51.180 04:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:07:51.180 04:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.180 04:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.180 04:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.180 04:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:51.180 04:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.181 04:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.441 [ 00:07:51.441 { 00:07:51.441 "name": "BaseBdev1", 00:07:51.441 "aliases": [ 00:07:51.441 "0d4187c9-a0fd-473a-ab8a-584e33bed0a8" 00:07:51.441 ], 00:07:51.441 "product_name": "Malloc disk", 00:07:51.441 "block_size": 512, 00:07:51.441 "num_blocks": 65536, 00:07:51.441 "uuid": "0d4187c9-a0fd-473a-ab8a-584e33bed0a8", 00:07:51.441 "assigned_rate_limits": { 00:07:51.441 "rw_ios_per_sec": 0, 00:07:51.441 "rw_mbytes_per_sec": 0, 00:07:51.441 "r_mbytes_per_sec": 0, 00:07:51.441 "w_mbytes_per_sec": 0 00:07:51.441 }, 00:07:51.441 "claimed": true, 00:07:51.441 "claim_type": "exclusive_write", 00:07:51.441 "zoned": false, 00:07:51.441 "supported_io_types": { 00:07:51.441 "read": true, 00:07:51.441 "write": true, 00:07:51.441 "unmap": true, 00:07:51.441 "flush": true, 00:07:51.441 "reset": true, 00:07:51.441 "nvme_admin": false, 00:07:51.441 "nvme_io": false, 00:07:51.441 "nvme_io_md": false, 00:07:51.441 "write_zeroes": true, 00:07:51.441 "zcopy": true, 00:07:51.441 "get_zone_info": false, 00:07:51.441 "zone_management": false, 00:07:51.441 "zone_append": false, 00:07:51.441 "compare": false, 00:07:51.441 "compare_and_write": false, 00:07:51.441 "abort": true, 00:07:51.441 "seek_hole": false, 00:07:51.441 "seek_data": false, 00:07:51.441 "copy": true, 00:07:51.441 "nvme_iov_md": 
false 00:07:51.441 }, 00:07:51.441 "memory_domains": [ 00:07:51.441 { 00:07:51.441 "dma_device_id": "system", 00:07:51.441 "dma_device_type": 1 00:07:51.441 }, 00:07:51.441 { 00:07:51.441 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:51.441 "dma_device_type": 2 00:07:51.441 } 00:07:51.441 ], 00:07:51.441 "driver_specific": {} 00:07:51.441 } 00:07:51.441 ] 00:07:51.441 04:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.441 04:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:51.441 04:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:51.441 04:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:51.441 04:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:51.441 04:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:51.441 04:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:51.441 04:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:51.441 04:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:51.441 04:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:51.441 04:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:51.441 04:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:51.441 04:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:51.441 04:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.441 
04:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.441 04:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.441 04:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.441 04:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:51.441 "name": "Existed_Raid", 00:07:51.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:51.441 "strip_size_kb": 64, 00:07:51.441 "state": "configuring", 00:07:51.441 "raid_level": "concat", 00:07:51.441 "superblock": false, 00:07:51.441 "num_base_bdevs": 2, 00:07:51.441 "num_base_bdevs_discovered": 1, 00:07:51.441 "num_base_bdevs_operational": 2, 00:07:51.441 "base_bdevs_list": [ 00:07:51.441 { 00:07:51.441 "name": "BaseBdev1", 00:07:51.441 "uuid": "0d4187c9-a0fd-473a-ab8a-584e33bed0a8", 00:07:51.441 "is_configured": true, 00:07:51.441 "data_offset": 0, 00:07:51.441 "data_size": 65536 00:07:51.441 }, 00:07:51.441 { 00:07:51.441 "name": "BaseBdev2", 00:07:51.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:51.441 "is_configured": false, 00:07:51.441 "data_offset": 0, 00:07:51.441 "data_size": 0 00:07:51.441 } 00:07:51.441 ] 00:07:51.441 }' 00:07:51.441 04:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:51.441 04:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.701 04:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:51.701 04:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.701 04:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.701 [2024-11-21 04:05:51.612847] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:51.701 [2024-11-21 04:05:51.612919] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:07:51.701 04:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.701 04:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:51.701 04:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.701 04:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.702 [2024-11-21 04:05:51.624835] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:51.702 [2024-11-21 04:05:51.627151] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:51.702 [2024-11-21 04:05:51.627269] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:51.702 04:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.702 04:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:51.702 04:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:51.702 04:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:51.702 04:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:51.702 04:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:51.702 04:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:51.702 04:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:51.702 04:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:07:51.702 04:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:51.702 04:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:51.702 04:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:51.702 04:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:51.702 04:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:51.702 04:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.702 04:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.702 04:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.702 04:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.962 04:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:51.962 "name": "Existed_Raid", 00:07:51.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:51.962 "strip_size_kb": 64, 00:07:51.962 "state": "configuring", 00:07:51.962 "raid_level": "concat", 00:07:51.962 "superblock": false, 00:07:51.962 "num_base_bdevs": 2, 00:07:51.962 "num_base_bdevs_discovered": 1, 00:07:51.962 "num_base_bdevs_operational": 2, 00:07:51.962 "base_bdevs_list": [ 00:07:51.962 { 00:07:51.962 "name": "BaseBdev1", 00:07:51.962 "uuid": "0d4187c9-a0fd-473a-ab8a-584e33bed0a8", 00:07:51.962 "is_configured": true, 00:07:51.962 "data_offset": 0, 00:07:51.962 "data_size": 65536 00:07:51.962 }, 00:07:51.962 { 00:07:51.962 "name": "BaseBdev2", 00:07:51.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:51.962 "is_configured": false, 00:07:51.962 "data_offset": 0, 00:07:51.962 "data_size": 0 00:07:51.962 } 
00:07:51.962 ] 00:07:51.962 }' 00:07:51.962 04:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:51.962 04:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.222 04:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:52.222 04:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.222 04:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.222 [2024-11-21 04:05:52.104783] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:52.222 [2024-11-21 04:05:52.104836] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:52.222 [2024-11-21 04:05:52.104852] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:52.222 [2024-11-21 04:05:52.105150] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:52.222 [2024-11-21 04:05:52.105337] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:52.222 [2024-11-21 04:05:52.105377] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:07:52.222 [2024-11-21 04:05:52.105643] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:52.222 BaseBdev2 00:07:52.222 04:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.222 04:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:52.222 04:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:52.222 04:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:52.222 04:05:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:52.222 04:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:52.222 04:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:52.222 04:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:52.222 04:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.222 04:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.222 04:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.222 04:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:52.222 04:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.222 04:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.222 [ 00:07:52.222 { 00:07:52.222 "name": "BaseBdev2", 00:07:52.222 "aliases": [ 00:07:52.222 "99bcca40-d925-45d4-acd9-dd16c1864f16" 00:07:52.222 ], 00:07:52.222 "product_name": "Malloc disk", 00:07:52.222 "block_size": 512, 00:07:52.222 "num_blocks": 65536, 00:07:52.222 "uuid": "99bcca40-d925-45d4-acd9-dd16c1864f16", 00:07:52.222 "assigned_rate_limits": { 00:07:52.222 "rw_ios_per_sec": 0, 00:07:52.222 "rw_mbytes_per_sec": 0, 00:07:52.222 "r_mbytes_per_sec": 0, 00:07:52.222 "w_mbytes_per_sec": 0 00:07:52.222 }, 00:07:52.222 "claimed": true, 00:07:52.222 "claim_type": "exclusive_write", 00:07:52.222 "zoned": false, 00:07:52.222 "supported_io_types": { 00:07:52.222 "read": true, 00:07:52.222 "write": true, 00:07:52.222 "unmap": true, 00:07:52.222 "flush": true, 00:07:52.222 "reset": true, 00:07:52.222 "nvme_admin": false, 00:07:52.222 "nvme_io": false, 00:07:52.222 "nvme_io_md": 
false, 00:07:52.222 "write_zeroes": true, 00:07:52.222 "zcopy": true, 00:07:52.222 "get_zone_info": false, 00:07:52.222 "zone_management": false, 00:07:52.222 "zone_append": false, 00:07:52.222 "compare": false, 00:07:52.222 "compare_and_write": false, 00:07:52.222 "abort": true, 00:07:52.222 "seek_hole": false, 00:07:52.222 "seek_data": false, 00:07:52.222 "copy": true, 00:07:52.222 "nvme_iov_md": false 00:07:52.222 }, 00:07:52.222 "memory_domains": [ 00:07:52.222 { 00:07:52.222 "dma_device_id": "system", 00:07:52.222 "dma_device_type": 1 00:07:52.222 }, 00:07:52.222 { 00:07:52.222 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:52.222 "dma_device_type": 2 00:07:52.222 } 00:07:52.222 ], 00:07:52.222 "driver_specific": {} 00:07:52.222 } 00:07:52.222 ] 00:07:52.222 04:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.222 04:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:52.222 04:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:52.222 04:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:52.222 04:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:52.222 04:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:52.222 04:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:52.222 04:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:52.222 04:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:52.222 04:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:52.222 04:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:52.222 04:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:52.222 04:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:52.222 04:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:52.222 04:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.222 04:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.222 04:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.223 04:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:52.223 04:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.223 04:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:52.223 "name": "Existed_Raid", 00:07:52.223 "uuid": "905d9f7c-8551-46bc-9f04-9378addcfbc1", 00:07:52.223 "strip_size_kb": 64, 00:07:52.223 "state": "online", 00:07:52.223 "raid_level": "concat", 00:07:52.223 "superblock": false, 00:07:52.223 "num_base_bdevs": 2, 00:07:52.223 "num_base_bdevs_discovered": 2, 00:07:52.223 "num_base_bdevs_operational": 2, 00:07:52.223 "base_bdevs_list": [ 00:07:52.223 { 00:07:52.223 "name": "BaseBdev1", 00:07:52.223 "uuid": "0d4187c9-a0fd-473a-ab8a-584e33bed0a8", 00:07:52.223 "is_configured": true, 00:07:52.223 "data_offset": 0, 00:07:52.223 "data_size": 65536 00:07:52.223 }, 00:07:52.223 { 00:07:52.223 "name": "BaseBdev2", 00:07:52.223 "uuid": "99bcca40-d925-45d4-acd9-dd16c1864f16", 00:07:52.223 "is_configured": true, 00:07:52.223 "data_offset": 0, 00:07:52.223 "data_size": 65536 00:07:52.223 } 00:07:52.223 ] 00:07:52.223 }' 00:07:52.223 04:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:07:52.223 04:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.792 04:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:52.792 04:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:52.792 04:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:52.792 04:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:52.792 04:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:52.792 04:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:52.792 04:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:52.792 04:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:52.792 04:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.792 04:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.792 [2024-11-21 04:05:52.584483] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:52.792 04:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.792 04:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:52.792 "name": "Existed_Raid", 00:07:52.792 "aliases": [ 00:07:52.792 "905d9f7c-8551-46bc-9f04-9378addcfbc1" 00:07:52.792 ], 00:07:52.792 "product_name": "Raid Volume", 00:07:52.792 "block_size": 512, 00:07:52.792 "num_blocks": 131072, 00:07:52.792 "uuid": "905d9f7c-8551-46bc-9f04-9378addcfbc1", 00:07:52.792 "assigned_rate_limits": { 00:07:52.792 "rw_ios_per_sec": 0, 00:07:52.792 "rw_mbytes_per_sec": 0, 00:07:52.792 "r_mbytes_per_sec": 
0, 00:07:52.792 "w_mbytes_per_sec": 0 00:07:52.792 }, 00:07:52.792 "claimed": false, 00:07:52.792 "zoned": false, 00:07:52.792 "supported_io_types": { 00:07:52.792 "read": true, 00:07:52.792 "write": true, 00:07:52.792 "unmap": true, 00:07:52.792 "flush": true, 00:07:52.792 "reset": true, 00:07:52.792 "nvme_admin": false, 00:07:52.792 "nvme_io": false, 00:07:52.792 "nvme_io_md": false, 00:07:52.792 "write_zeroes": true, 00:07:52.792 "zcopy": false, 00:07:52.792 "get_zone_info": false, 00:07:52.792 "zone_management": false, 00:07:52.792 "zone_append": false, 00:07:52.792 "compare": false, 00:07:52.792 "compare_and_write": false, 00:07:52.792 "abort": false, 00:07:52.792 "seek_hole": false, 00:07:52.792 "seek_data": false, 00:07:52.792 "copy": false, 00:07:52.792 "nvme_iov_md": false 00:07:52.792 }, 00:07:52.792 "memory_domains": [ 00:07:52.792 { 00:07:52.792 "dma_device_id": "system", 00:07:52.792 "dma_device_type": 1 00:07:52.792 }, 00:07:52.792 { 00:07:52.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:52.792 "dma_device_type": 2 00:07:52.792 }, 00:07:52.792 { 00:07:52.792 "dma_device_id": "system", 00:07:52.792 "dma_device_type": 1 00:07:52.792 }, 00:07:52.792 { 00:07:52.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:52.792 "dma_device_type": 2 00:07:52.792 } 00:07:52.792 ], 00:07:52.792 "driver_specific": { 00:07:52.792 "raid": { 00:07:52.792 "uuid": "905d9f7c-8551-46bc-9f04-9378addcfbc1", 00:07:52.792 "strip_size_kb": 64, 00:07:52.792 "state": "online", 00:07:52.792 "raid_level": "concat", 00:07:52.792 "superblock": false, 00:07:52.792 "num_base_bdevs": 2, 00:07:52.792 "num_base_bdevs_discovered": 2, 00:07:52.792 "num_base_bdevs_operational": 2, 00:07:52.792 "base_bdevs_list": [ 00:07:52.792 { 00:07:52.792 "name": "BaseBdev1", 00:07:52.792 "uuid": "0d4187c9-a0fd-473a-ab8a-584e33bed0a8", 00:07:52.792 "is_configured": true, 00:07:52.792 "data_offset": 0, 00:07:52.792 "data_size": 65536 00:07:52.792 }, 00:07:52.792 { 00:07:52.793 "name": "BaseBdev2", 
00:07:52.793 "uuid": "99bcca40-d925-45d4-acd9-dd16c1864f16", 00:07:52.793 "is_configured": true, 00:07:52.793 "data_offset": 0, 00:07:52.793 "data_size": 65536 00:07:52.793 } 00:07:52.793 ] 00:07:52.793 } 00:07:52.793 } 00:07:52.793 }' 00:07:52.793 04:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:52.793 04:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:52.793 BaseBdev2' 00:07:52.793 04:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:52.793 04:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:52.793 04:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:52.793 04:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:52.793 04:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:52.793 04:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.793 04:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.793 04:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.793 04:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:52.793 04:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:52.793 04:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:52.793 04:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:07:52.793 04:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.793 04:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:52.793 04:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.052 04:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.052 04:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:53.052 04:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:53.052 04:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:53.052 04:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.053 04:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.053 [2024-11-21 04:05:52.811955] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:53.053 [2024-11-21 04:05:52.812041] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:53.053 [2024-11-21 04:05:52.812137] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:53.053 04:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.053 04:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:53.053 04:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:53.053 04:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:53.053 04:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:53.053 04:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:07:53.053 04:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:53.053 04:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:53.053 04:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:53.053 04:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:53.053 04:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:53.053 04:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:53.053 04:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:53.053 04:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:53.053 04:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:53.053 04:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:53.053 04:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:53.053 04:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.053 04:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.053 04:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.053 04:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.053 04:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:53.053 "name": "Existed_Raid", 00:07:53.053 "uuid": "905d9f7c-8551-46bc-9f04-9378addcfbc1", 00:07:53.053 "strip_size_kb": 64, 00:07:53.053 
"state": "offline", 00:07:53.053 "raid_level": "concat", 00:07:53.053 "superblock": false, 00:07:53.053 "num_base_bdevs": 2, 00:07:53.053 "num_base_bdevs_discovered": 1, 00:07:53.053 "num_base_bdevs_operational": 1, 00:07:53.053 "base_bdevs_list": [ 00:07:53.053 { 00:07:53.053 "name": null, 00:07:53.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:53.053 "is_configured": false, 00:07:53.053 "data_offset": 0, 00:07:53.053 "data_size": 65536 00:07:53.053 }, 00:07:53.053 { 00:07:53.053 "name": "BaseBdev2", 00:07:53.053 "uuid": "99bcca40-d925-45d4-acd9-dd16c1864f16", 00:07:53.053 "is_configured": true, 00:07:53.053 "data_offset": 0, 00:07:53.053 "data_size": 65536 00:07:53.053 } 00:07:53.053 ] 00:07:53.053 }' 00:07:53.053 04:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:53.053 04:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.313 04:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:53.313 04:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:53.313 04:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.313 04:05:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.313 04:05:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.313 04:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:53.313 04:05:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.573 04:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:53.573 04:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:53.573 04:05:53 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:53.573 04:05:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.573 04:05:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.573 [2024-11-21 04:05:53.312601] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:53.573 [2024-11-21 04:05:53.312719] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:07:53.573 04:05:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.573 04:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:53.573 04:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:53.573 04:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.573 04:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:53.573 04:05:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.573 04:05:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.573 04:05:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.573 04:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:53.573 04:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:53.573 04:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:53.573 04:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 73025 00:07:53.573 04:05:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 73025 ']' 00:07:53.573 04:05:53 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 73025 00:07:53.573 04:05:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:53.573 04:05:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:53.573 04:05:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73025 00:07:53.573 killing process with pid 73025 00:07:53.573 04:05:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:53.573 04:05:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:53.573 04:05:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73025' 00:07:53.573 04:05:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 73025 00:07:53.573 [2024-11-21 04:05:53.432685] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:53.573 04:05:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 73025 00:07:53.573 [2024-11-21 04:05:53.434298] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:53.832 04:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:53.832 00:07:53.832 real 0m4.064s 00:07:53.832 user 0m6.278s 00:07:53.832 sys 0m0.846s 00:07:53.832 04:05:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:53.832 04:05:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.832 ************************************ 00:07:53.832 END TEST raid_state_function_test 00:07:53.832 ************************************ 00:07:54.092 04:05:53 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:07:54.092 04:05:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:07:54.092 04:05:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:54.092 04:05:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:54.092 ************************************ 00:07:54.092 START TEST raid_state_function_test_sb 00:07:54.092 ************************************ 00:07:54.092 04:05:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true 00:07:54.092 04:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:54.092 04:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:54.092 04:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:54.092 04:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:54.092 04:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:54.092 04:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:54.092 04:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:54.092 04:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:54.092 04:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:54.092 04:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:54.092 04:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:54.092 04:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:54.092 04:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:54.092 04:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:07:54.092 04:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:54.092 04:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:54.092 04:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:54.092 04:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:54.092 04:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:54.092 04:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:54.092 04:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:54.092 04:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:54.092 04:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:54.092 04:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73262 00:07:54.092 04:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:54.092 04:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73262' 00:07:54.092 Process raid pid: 73262 00:07:54.092 04:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73262 00:07:54.092 04:05:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 73262 ']' 00:07:54.092 04:05:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.092 04:05:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:54.092 04:05:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:54.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:54.092 04:05:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:54.092 04:05:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.092 [2024-11-21 04:05:53.936733] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:07:54.092 [2024-11-21 04:05:53.936934] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:54.352 [2024-11-21 04:05:54.094906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.352 [2024-11-21 04:05:54.135473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.352 [2024-11-21 04:05:54.212396] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:54.352 [2024-11-21 04:05:54.212436] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:54.921 04:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:54.921 04:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:54.921 04:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:54.921 04:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.921 04:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.921 [2024-11-21 04:05:54.772239] bdev.c:8272:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:07:54.921 [2024-11-21 04:05:54.772297] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:54.922 [2024-11-21 04:05:54.772309] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:54.922 [2024-11-21 04:05:54.772321] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:54.922 04:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.922 04:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:54.922 04:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:54.922 04:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:54.922 04:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:54.922 04:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:54.922 04:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:54.922 04:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:54.922 04:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:54.922 04:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:54.922 04:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:54.922 04:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:54.922 04:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:07:54.922 04:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.922 04:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.922 04:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.922 04:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:54.922 "name": "Existed_Raid", 00:07:54.922 "uuid": "7ae39f22-7d62-47a9-9dac-dc4384c4e524", 00:07:54.922 "strip_size_kb": 64, 00:07:54.922 "state": "configuring", 00:07:54.922 "raid_level": "concat", 00:07:54.922 "superblock": true, 00:07:54.922 "num_base_bdevs": 2, 00:07:54.922 "num_base_bdevs_discovered": 0, 00:07:54.922 "num_base_bdevs_operational": 2, 00:07:54.922 "base_bdevs_list": [ 00:07:54.922 { 00:07:54.922 "name": "BaseBdev1", 00:07:54.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:54.922 "is_configured": false, 00:07:54.922 "data_offset": 0, 00:07:54.922 "data_size": 0 00:07:54.922 }, 00:07:54.922 { 00:07:54.922 "name": "BaseBdev2", 00:07:54.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:54.922 "is_configured": false, 00:07:54.922 "data_offset": 0, 00:07:54.922 "data_size": 0 00:07:54.922 } 00:07:54.922 ] 00:07:54.922 }' 00:07:54.922 04:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:54.922 04:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:55.490 04:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:55.490 04:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.490 04:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:55.490 [2024-11-21 04:05:55.243339] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:07:55.490 [2024-11-21 04:05:55.243392] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:07:55.490 04:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.490 04:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:55.490 04:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.490 04:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:55.490 [2024-11-21 04:05:55.251336] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:55.490 [2024-11-21 04:05:55.251419] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:55.490 [2024-11-21 04:05:55.251449] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:55.490 [2024-11-21 04:05:55.251492] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:55.490 04:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.490 04:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:55.490 04:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.490 04:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:55.490 [2024-11-21 04:05:55.274505] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:55.490 BaseBdev1 00:07:55.490 04:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.490 04:05:55 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:55.490 04:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:55.490 04:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:55.490 04:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:55.490 04:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:55.490 04:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:55.490 04:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:55.490 04:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.490 04:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:55.490 04:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.490 04:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:55.490 04:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.490 04:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:55.490 [ 00:07:55.490 { 00:07:55.490 "name": "BaseBdev1", 00:07:55.490 "aliases": [ 00:07:55.490 "4efa43c2-a46f-4027-bd67-2666e8d21998" 00:07:55.490 ], 00:07:55.490 "product_name": "Malloc disk", 00:07:55.490 "block_size": 512, 00:07:55.490 "num_blocks": 65536, 00:07:55.490 "uuid": "4efa43c2-a46f-4027-bd67-2666e8d21998", 00:07:55.490 "assigned_rate_limits": { 00:07:55.490 "rw_ios_per_sec": 0, 00:07:55.490 "rw_mbytes_per_sec": 0, 00:07:55.490 "r_mbytes_per_sec": 0, 00:07:55.490 "w_mbytes_per_sec": 0 00:07:55.490 }, 00:07:55.490 "claimed": true, 
00:07:55.490 "claim_type": "exclusive_write", 00:07:55.491 "zoned": false, 00:07:55.491 "supported_io_types": { 00:07:55.491 "read": true, 00:07:55.491 "write": true, 00:07:55.491 "unmap": true, 00:07:55.491 "flush": true, 00:07:55.491 "reset": true, 00:07:55.491 "nvme_admin": false, 00:07:55.491 "nvme_io": false, 00:07:55.491 "nvme_io_md": false, 00:07:55.491 "write_zeroes": true, 00:07:55.491 "zcopy": true, 00:07:55.491 "get_zone_info": false, 00:07:55.491 "zone_management": false, 00:07:55.491 "zone_append": false, 00:07:55.491 "compare": false, 00:07:55.491 "compare_and_write": false, 00:07:55.491 "abort": true, 00:07:55.491 "seek_hole": false, 00:07:55.491 "seek_data": false, 00:07:55.491 "copy": true, 00:07:55.491 "nvme_iov_md": false 00:07:55.491 }, 00:07:55.491 "memory_domains": [ 00:07:55.491 { 00:07:55.491 "dma_device_id": "system", 00:07:55.491 "dma_device_type": 1 00:07:55.491 }, 00:07:55.491 { 00:07:55.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:55.491 "dma_device_type": 2 00:07:55.491 } 00:07:55.491 ], 00:07:55.491 "driver_specific": {} 00:07:55.491 } 00:07:55.491 ] 00:07:55.491 04:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.491 04:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:55.491 04:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:55.491 04:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:55.491 04:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:55.491 04:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:55.491 04:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:55.491 04:05:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:55.491 04:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:55.491 04:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:55.491 04:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:55.491 04:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:55.491 04:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:55.491 04:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:55.491 04:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.491 04:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:55.491 04:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.491 04:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:55.491 "name": "Existed_Raid", 00:07:55.491 "uuid": "fa6b5bfb-b782-4003-840c-546a9b7b1784", 00:07:55.491 "strip_size_kb": 64, 00:07:55.491 "state": "configuring", 00:07:55.491 "raid_level": "concat", 00:07:55.491 "superblock": true, 00:07:55.491 "num_base_bdevs": 2, 00:07:55.491 "num_base_bdevs_discovered": 1, 00:07:55.491 "num_base_bdevs_operational": 2, 00:07:55.491 "base_bdevs_list": [ 00:07:55.491 { 00:07:55.491 "name": "BaseBdev1", 00:07:55.491 "uuid": "4efa43c2-a46f-4027-bd67-2666e8d21998", 00:07:55.491 "is_configured": true, 00:07:55.491 "data_offset": 2048, 00:07:55.491 "data_size": 63488 00:07:55.491 }, 00:07:55.491 { 00:07:55.491 "name": "BaseBdev2", 00:07:55.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:55.491 
"is_configured": false, 00:07:55.491 "data_offset": 0, 00:07:55.491 "data_size": 0 00:07:55.491 } 00:07:55.491 ] 00:07:55.491 }' 00:07:55.491 04:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:55.491 04:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:56.059 04:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:56.059 04:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.059 04:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:56.059 [2024-11-21 04:05:55.753804] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:56.059 [2024-11-21 04:05:55.753859] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:07:56.059 04:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.059 04:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:56.059 04:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.059 04:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:56.059 [2024-11-21 04:05:55.765804] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:56.059 [2024-11-21 04:05:55.768297] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:56.059 [2024-11-21 04:05:55.768343] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:56.059 04:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.059 04:05:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:56.059 04:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:56.059 04:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:56.059 04:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:56.059 04:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:56.059 04:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:56.059 04:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:56.059 04:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:56.059 04:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:56.059 04:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:56.059 04:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:56.059 04:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:56.059 04:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.059 04:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.059 04:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:56.059 04:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:56.059 04:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.059 04:05:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:56.059 "name": "Existed_Raid", 00:07:56.059 "uuid": "a2968b43-edd0-4d54-ae74-d8cbbdf1898a", 00:07:56.059 "strip_size_kb": 64, 00:07:56.059 "state": "configuring", 00:07:56.059 "raid_level": "concat", 00:07:56.059 "superblock": true, 00:07:56.059 "num_base_bdevs": 2, 00:07:56.059 "num_base_bdevs_discovered": 1, 00:07:56.059 "num_base_bdevs_operational": 2, 00:07:56.059 "base_bdevs_list": [ 00:07:56.059 { 00:07:56.059 "name": "BaseBdev1", 00:07:56.059 "uuid": "4efa43c2-a46f-4027-bd67-2666e8d21998", 00:07:56.059 "is_configured": true, 00:07:56.059 "data_offset": 2048, 00:07:56.059 "data_size": 63488 00:07:56.059 }, 00:07:56.059 { 00:07:56.059 "name": "BaseBdev2", 00:07:56.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:56.059 "is_configured": false, 00:07:56.059 "data_offset": 0, 00:07:56.060 "data_size": 0 00:07:56.060 } 00:07:56.060 ] 00:07:56.060 }' 00:07:56.060 04:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:56.060 04:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:56.320 04:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:56.320 04:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.320 04:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:56.320 [2024-11-21 04:05:56.233930] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:56.320 [2024-11-21 04:05:56.234353] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:56.320 [2024-11-21 04:05:56.234409] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:56.320 [2024-11-21 04:05:56.234801] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000002390 00:07:56.320 BaseBdev2 00:07:56.320 [2024-11-21 04:05:56.235010] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:56.320 [2024-11-21 04:05:56.235030] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:07:56.320 [2024-11-21 04:05:56.235215] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:56.320 04:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.320 04:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:56.320 04:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:56.320 04:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:56.320 04:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:56.320 04:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:56.320 04:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:56.320 04:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:56.320 04:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.320 04:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:56.320 04:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.320 04:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:56.320 04:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.320 04:05:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:56.320 [ 00:07:56.320 { 00:07:56.320 "name": "BaseBdev2", 00:07:56.320 "aliases": [ 00:07:56.320 "0c04be48-f7ee-46fa-9a07-f848afe88789" 00:07:56.320 ], 00:07:56.320 "product_name": "Malloc disk", 00:07:56.320 "block_size": 512, 00:07:56.320 "num_blocks": 65536, 00:07:56.320 "uuid": "0c04be48-f7ee-46fa-9a07-f848afe88789", 00:07:56.320 "assigned_rate_limits": { 00:07:56.320 "rw_ios_per_sec": 0, 00:07:56.320 "rw_mbytes_per_sec": 0, 00:07:56.320 "r_mbytes_per_sec": 0, 00:07:56.320 "w_mbytes_per_sec": 0 00:07:56.320 }, 00:07:56.320 "claimed": true, 00:07:56.320 "claim_type": "exclusive_write", 00:07:56.320 "zoned": false, 00:07:56.320 "supported_io_types": { 00:07:56.320 "read": true, 00:07:56.320 "write": true, 00:07:56.320 "unmap": true, 00:07:56.320 "flush": true, 00:07:56.320 "reset": true, 00:07:56.320 "nvme_admin": false, 00:07:56.320 "nvme_io": false, 00:07:56.320 "nvme_io_md": false, 00:07:56.320 "write_zeroes": true, 00:07:56.320 "zcopy": true, 00:07:56.320 "get_zone_info": false, 00:07:56.320 "zone_management": false, 00:07:56.320 "zone_append": false, 00:07:56.320 "compare": false, 00:07:56.320 "compare_and_write": false, 00:07:56.320 "abort": true, 00:07:56.320 "seek_hole": false, 00:07:56.320 "seek_data": false, 00:07:56.320 "copy": true, 00:07:56.320 "nvme_iov_md": false 00:07:56.320 }, 00:07:56.320 "memory_domains": [ 00:07:56.320 { 00:07:56.320 "dma_device_id": "system", 00:07:56.320 "dma_device_type": 1 00:07:56.320 }, 00:07:56.320 { 00:07:56.320 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:56.320 "dma_device_type": 2 00:07:56.320 } 00:07:56.320 ], 00:07:56.320 "driver_specific": {} 00:07:56.320 } 00:07:56.320 ] 00:07:56.320 04:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.320 04:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:56.320 04:05:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:56.320 04:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:56.320 04:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:56.320 04:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:56.320 04:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:56.320 04:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:56.320 04:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:56.320 04:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:56.320 04:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:56.320 04:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:56.320 04:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:56.320 04:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:56.320 04:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.320 04:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:56.320 04:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.320 04:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:56.580 04:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.580 04:05:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:56.580 "name": "Existed_Raid", 00:07:56.580 "uuid": "a2968b43-edd0-4d54-ae74-d8cbbdf1898a", 00:07:56.580 "strip_size_kb": 64, 00:07:56.580 "state": "online", 00:07:56.580 "raid_level": "concat", 00:07:56.580 "superblock": true, 00:07:56.580 "num_base_bdevs": 2, 00:07:56.580 "num_base_bdevs_discovered": 2, 00:07:56.580 "num_base_bdevs_operational": 2, 00:07:56.580 "base_bdevs_list": [ 00:07:56.580 { 00:07:56.580 "name": "BaseBdev1", 00:07:56.580 "uuid": "4efa43c2-a46f-4027-bd67-2666e8d21998", 00:07:56.580 "is_configured": true, 00:07:56.580 "data_offset": 2048, 00:07:56.580 "data_size": 63488 00:07:56.580 }, 00:07:56.580 { 00:07:56.580 "name": "BaseBdev2", 00:07:56.580 "uuid": "0c04be48-f7ee-46fa-9a07-f848afe88789", 00:07:56.580 "is_configured": true, 00:07:56.580 "data_offset": 2048, 00:07:56.580 "data_size": 63488 00:07:56.580 } 00:07:56.580 ] 00:07:56.580 }' 00:07:56.580 04:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:56.580 04:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:56.841 04:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:56.841 04:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:56.841 04:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:56.841 04:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:56.841 04:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:56.841 04:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:56.841 04:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:07:56.841 04:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:56.841 04:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.841 04:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:56.841 [2024-11-21 04:05:56.753487] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:56.841 04:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.841 04:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:56.841 "name": "Existed_Raid", 00:07:56.841 "aliases": [ 00:07:56.841 "a2968b43-edd0-4d54-ae74-d8cbbdf1898a" 00:07:56.841 ], 00:07:56.841 "product_name": "Raid Volume", 00:07:56.841 "block_size": 512, 00:07:56.841 "num_blocks": 126976, 00:07:56.841 "uuid": "a2968b43-edd0-4d54-ae74-d8cbbdf1898a", 00:07:56.841 "assigned_rate_limits": { 00:07:56.841 "rw_ios_per_sec": 0, 00:07:56.841 "rw_mbytes_per_sec": 0, 00:07:56.841 "r_mbytes_per_sec": 0, 00:07:56.841 "w_mbytes_per_sec": 0 00:07:56.841 }, 00:07:56.841 "claimed": false, 00:07:56.841 "zoned": false, 00:07:56.841 "supported_io_types": { 00:07:56.841 "read": true, 00:07:56.841 "write": true, 00:07:56.841 "unmap": true, 00:07:56.841 "flush": true, 00:07:56.841 "reset": true, 00:07:56.841 "nvme_admin": false, 00:07:56.841 "nvme_io": false, 00:07:56.841 "nvme_io_md": false, 00:07:56.841 "write_zeroes": true, 00:07:56.841 "zcopy": false, 00:07:56.841 "get_zone_info": false, 00:07:56.841 "zone_management": false, 00:07:56.841 "zone_append": false, 00:07:56.841 "compare": false, 00:07:56.841 "compare_and_write": false, 00:07:56.841 "abort": false, 00:07:56.841 "seek_hole": false, 00:07:56.841 "seek_data": false, 00:07:56.841 "copy": false, 00:07:56.841 "nvme_iov_md": false 00:07:56.841 }, 00:07:56.841 "memory_domains": [ 00:07:56.841 { 00:07:56.841 
"dma_device_id": "system", 00:07:56.841 "dma_device_type": 1 00:07:56.841 }, 00:07:56.841 { 00:07:56.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:56.841 "dma_device_type": 2 00:07:56.841 }, 00:07:56.841 { 00:07:56.841 "dma_device_id": "system", 00:07:56.841 "dma_device_type": 1 00:07:56.841 }, 00:07:56.841 { 00:07:56.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:56.841 "dma_device_type": 2 00:07:56.841 } 00:07:56.841 ], 00:07:56.841 "driver_specific": { 00:07:56.841 "raid": { 00:07:56.841 "uuid": "a2968b43-edd0-4d54-ae74-d8cbbdf1898a", 00:07:56.841 "strip_size_kb": 64, 00:07:56.841 "state": "online", 00:07:56.841 "raid_level": "concat", 00:07:56.841 "superblock": true, 00:07:56.841 "num_base_bdevs": 2, 00:07:56.841 "num_base_bdevs_discovered": 2, 00:07:56.841 "num_base_bdevs_operational": 2, 00:07:56.841 "base_bdevs_list": [ 00:07:56.841 { 00:07:56.841 "name": "BaseBdev1", 00:07:56.841 "uuid": "4efa43c2-a46f-4027-bd67-2666e8d21998", 00:07:56.841 "is_configured": true, 00:07:56.841 "data_offset": 2048, 00:07:56.841 "data_size": 63488 00:07:56.841 }, 00:07:56.841 { 00:07:56.841 "name": "BaseBdev2", 00:07:56.841 "uuid": "0c04be48-f7ee-46fa-9a07-f848afe88789", 00:07:56.841 "is_configured": true, 00:07:56.841 "data_offset": 2048, 00:07:56.841 "data_size": 63488 00:07:56.841 } 00:07:56.841 ] 00:07:56.841 } 00:07:56.841 } 00:07:56.841 }' 00:07:56.841 04:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:57.101 04:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:57.101 BaseBdev2' 00:07:57.101 04:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:57.101 04:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:57.101 04:05:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:57.101 04:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:57.101 04:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:57.101 04:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.101 04:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:57.101 04:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.101 04:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:57.101 04:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:57.101 04:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:57.101 04:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:57.101 04:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.101 04:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:57.101 04:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:57.101 04:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.101 04:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:57.101 04:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:57.101 04:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:07:57.101 04:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.101 04:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:57.101 [2024-11-21 04:05:56.968892] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:57.101 [2024-11-21 04:05:56.968925] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:57.101 [2024-11-21 04:05:56.968999] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:57.101 04:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.101 04:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:57.101 04:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:57.101 04:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:57.101 04:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:57.101 04:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:57.101 04:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:57.101 04:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:57.101 04:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:57.101 04:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:57.101 04:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:57.102 04:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:07:57.102 04:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:57.102 04:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:57.102 04:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:57.102 04:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:57.102 04:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.102 04:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:57.102 04:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.102 04:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:57.102 04:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.102 04:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:57.102 "name": "Existed_Raid", 00:07:57.102 "uuid": "a2968b43-edd0-4d54-ae74-d8cbbdf1898a", 00:07:57.102 "strip_size_kb": 64, 00:07:57.102 "state": "offline", 00:07:57.102 "raid_level": "concat", 00:07:57.102 "superblock": true, 00:07:57.102 "num_base_bdevs": 2, 00:07:57.102 "num_base_bdevs_discovered": 1, 00:07:57.102 "num_base_bdevs_operational": 1, 00:07:57.102 "base_bdevs_list": [ 00:07:57.102 { 00:07:57.102 "name": null, 00:07:57.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.102 "is_configured": false, 00:07:57.102 "data_offset": 0, 00:07:57.102 "data_size": 63488 00:07:57.102 }, 00:07:57.102 { 00:07:57.102 "name": "BaseBdev2", 00:07:57.102 "uuid": "0c04be48-f7ee-46fa-9a07-f848afe88789", 00:07:57.102 "is_configured": true, 00:07:57.102 "data_offset": 2048, 00:07:57.102 "data_size": 63488 00:07:57.102 } 00:07:57.102 ] 
00:07:57.102 }' 00:07:57.102 04:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:57.102 04:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:57.674 04:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:57.674 04:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:57.674 04:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.674 04:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:57.674 04:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.674 04:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:57.674 04:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.674 04:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:57.674 04:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:57.674 04:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:57.674 04:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.674 04:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:57.674 [2024-11-21 04:05:57.472902] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:57.674 [2024-11-21 04:05:57.473026] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:07:57.674 04:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.674 04:05:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:57.674 04:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:57.674 04:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:57.674 04:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.674 04:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.674 04:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:57.674 04:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.674 04:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:57.674 04:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:57.674 04:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:57.674 04:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73262 00:07:57.674 04:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 73262 ']' 00:07:57.674 04:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 73262 00:07:57.674 04:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:57.674 04:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:57.674 04:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73262 00:07:57.674 04:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:57.674 04:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = 
sudo ']' 00:07:57.674 killing process with pid 73262 00:07:57.674 04:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73262' 00:07:57.674 04:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 73262 00:07:57.674 [2024-11-21 04:05:57.596620] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:57.674 04:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 73262 00:07:57.674 [2024-11-21 04:05:57.598285] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:58.249 ************************************ 00:07:58.249 END TEST raid_state_function_test_sb 00:07:58.249 ************************************ 00:07:58.249 04:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:58.249 00:07:58.249 real 0m4.100s 00:07:58.249 user 0m6.318s 00:07:58.249 sys 0m0.896s 00:07:58.249 04:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:58.249 04:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.249 04:05:57 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:07:58.249 04:05:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:58.249 04:05:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:58.249 04:05:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:58.249 ************************************ 00:07:58.249 START TEST raid_superblock_test 00:07:58.249 ************************************ 00:07:58.249 04:05:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 2 00:07:58.249 04:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:07:58.249 04:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 
-- # local num_base_bdevs=2 00:07:58.249 04:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:58.249 04:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:58.249 04:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:58.249 04:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:58.249 04:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:58.249 04:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:58.249 04:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:58.249 04:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:58.249 04:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:58.249 04:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:58.249 04:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:58.249 04:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:07:58.249 04:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:58.249 04:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:58.249 04:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=73503 00:07:58.249 04:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:58.249 04:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 73503 00:07:58.249 04:05:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 73503 ']' 00:07:58.249 04:05:58 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:58.249 04:05:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:58.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:58.249 04:05:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:58.249 04:05:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:58.249 04:05:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.249 [2024-11-21 04:05:58.111818] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:07:58.249 [2024-11-21 04:05:58.111976] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73503 ] 00:07:58.509 [2024-11-21 04:05:58.250027] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.509 [2024-11-21 04:05:58.290719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.509 [2024-11-21 04:05:58.367011] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:58.509 [2024-11-21 04:05:58.367055] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:59.116 04:05:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:59.117 04:05:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:59.117 04:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:59.117 04:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:59.117 
04:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:59.117 04:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:59.117 04:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:59.117 04:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:59.117 04:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:59.117 04:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:59.117 04:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:59.117 04:05:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.117 04:05:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.117 malloc1 00:07:59.117 04:05:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.117 04:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:59.117 04:05:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.117 04:05:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.117 [2024-11-21 04:05:58.973889] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:59.117 [2024-11-21 04:05:58.974001] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:59.117 [2024-11-21 04:05:58.974041] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:07:59.117 [2024-11-21 04:05:58.974078] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:07:59.117 [2024-11-21 04:05:58.976705] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:59.117 [2024-11-21 04:05:58.976787] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:59.117 pt1 00:07:59.117 04:05:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.117 04:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:59.117 04:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:59.117 04:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:59.117 04:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:59.117 04:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:59.117 04:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:59.117 04:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:59.117 04:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:59.117 04:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:59.117 04:05:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.117 04:05:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.117 malloc2 00:07:59.117 04:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.117 04:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:59.117 04:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:07:59.117 04:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.117 [2024-11-21 04:05:59.009299] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:59.117 [2024-11-21 04:05:59.009400] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:59.117 [2024-11-21 04:05:59.009436] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:59.117 [2024-11-21 04:05:59.009467] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:59.117 [2024-11-21 04:05:59.012020] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:59.117 [2024-11-21 04:05:59.012091] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:59.117 pt2 00:07:59.117 04:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.117 04:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:59.117 04:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:59.117 04:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:59.117 04:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.117 04:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.117 [2024-11-21 04:05:59.021323] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:59.117 [2024-11-21 04:05:59.023575] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:59.117 [2024-11-21 04:05:59.023788] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:07:59.117 [2024-11-21 04:05:59.023838] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:07:59.117 [2024-11-21 04:05:59.024183] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:59.117 [2024-11-21 04:05:59.024407] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:07:59.117 [2024-11-21 04:05:59.024453] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:07:59.117 [2024-11-21 04:05:59.024691] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:59.117 04:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.117 04:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:59.117 04:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:59.117 04:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:59.117 04:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:59.117 04:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:59.117 04:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:59.117 04:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:59.117 04:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:59.117 04:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:59.117 04:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:59.117 04:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.117 04:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.117 04:05:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:59.117 04:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.117 04:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.117 04:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:59.117 "name": "raid_bdev1", 00:07:59.117 "uuid": "61380cc9-e4e7-4f54-b6ab-bf55b24bd15e", 00:07:59.117 "strip_size_kb": 64, 00:07:59.117 "state": "online", 00:07:59.117 "raid_level": "concat", 00:07:59.117 "superblock": true, 00:07:59.117 "num_base_bdevs": 2, 00:07:59.117 "num_base_bdevs_discovered": 2, 00:07:59.117 "num_base_bdevs_operational": 2, 00:07:59.117 "base_bdevs_list": [ 00:07:59.117 { 00:07:59.117 "name": "pt1", 00:07:59.117 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:59.117 "is_configured": true, 00:07:59.117 "data_offset": 2048, 00:07:59.117 "data_size": 63488 00:07:59.117 }, 00:07:59.117 { 00:07:59.117 "name": "pt2", 00:07:59.117 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:59.117 "is_configured": true, 00:07:59.117 "data_offset": 2048, 00:07:59.117 "data_size": 63488 00:07:59.117 } 00:07:59.117 ] 00:07:59.117 }' 00:07:59.117 04:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:59.117 04:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.686 04:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:59.687 04:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:59.687 04:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:59.687 04:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:59.687 04:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # 
local name 00:07:59.687 04:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:59.687 04:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:59.687 04:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:59.687 04:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.687 04:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.687 [2024-11-21 04:05:59.464939] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:59.687 04:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.687 04:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:59.687 "name": "raid_bdev1", 00:07:59.687 "aliases": [ 00:07:59.687 "61380cc9-e4e7-4f54-b6ab-bf55b24bd15e" 00:07:59.687 ], 00:07:59.687 "product_name": "Raid Volume", 00:07:59.687 "block_size": 512, 00:07:59.687 "num_blocks": 126976, 00:07:59.687 "uuid": "61380cc9-e4e7-4f54-b6ab-bf55b24bd15e", 00:07:59.687 "assigned_rate_limits": { 00:07:59.687 "rw_ios_per_sec": 0, 00:07:59.687 "rw_mbytes_per_sec": 0, 00:07:59.687 "r_mbytes_per_sec": 0, 00:07:59.687 "w_mbytes_per_sec": 0 00:07:59.687 }, 00:07:59.687 "claimed": false, 00:07:59.687 "zoned": false, 00:07:59.687 "supported_io_types": { 00:07:59.687 "read": true, 00:07:59.687 "write": true, 00:07:59.687 "unmap": true, 00:07:59.687 "flush": true, 00:07:59.687 "reset": true, 00:07:59.687 "nvme_admin": false, 00:07:59.687 "nvme_io": false, 00:07:59.687 "nvme_io_md": false, 00:07:59.687 "write_zeroes": true, 00:07:59.687 "zcopy": false, 00:07:59.687 "get_zone_info": false, 00:07:59.687 "zone_management": false, 00:07:59.687 "zone_append": false, 00:07:59.687 "compare": false, 00:07:59.687 "compare_and_write": false, 00:07:59.687 "abort": false, 00:07:59.687 
"seek_hole": false, 00:07:59.687 "seek_data": false, 00:07:59.687 "copy": false, 00:07:59.687 "nvme_iov_md": false 00:07:59.687 }, 00:07:59.687 "memory_domains": [ 00:07:59.687 { 00:07:59.687 "dma_device_id": "system", 00:07:59.687 "dma_device_type": 1 00:07:59.687 }, 00:07:59.687 { 00:07:59.687 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:59.687 "dma_device_type": 2 00:07:59.687 }, 00:07:59.687 { 00:07:59.687 "dma_device_id": "system", 00:07:59.687 "dma_device_type": 1 00:07:59.687 }, 00:07:59.687 { 00:07:59.687 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:59.687 "dma_device_type": 2 00:07:59.687 } 00:07:59.687 ], 00:07:59.687 "driver_specific": { 00:07:59.687 "raid": { 00:07:59.687 "uuid": "61380cc9-e4e7-4f54-b6ab-bf55b24bd15e", 00:07:59.687 "strip_size_kb": 64, 00:07:59.687 "state": "online", 00:07:59.687 "raid_level": "concat", 00:07:59.687 "superblock": true, 00:07:59.687 "num_base_bdevs": 2, 00:07:59.687 "num_base_bdevs_discovered": 2, 00:07:59.687 "num_base_bdevs_operational": 2, 00:07:59.687 "base_bdevs_list": [ 00:07:59.687 { 00:07:59.687 "name": "pt1", 00:07:59.687 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:59.687 "is_configured": true, 00:07:59.687 "data_offset": 2048, 00:07:59.687 "data_size": 63488 00:07:59.687 }, 00:07:59.687 { 00:07:59.687 "name": "pt2", 00:07:59.687 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:59.687 "is_configured": true, 00:07:59.687 "data_offset": 2048, 00:07:59.687 "data_size": 63488 00:07:59.687 } 00:07:59.687 ] 00:07:59.687 } 00:07:59.687 } 00:07:59.687 }' 00:07:59.687 04:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:59.687 04:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:59.687 pt2' 00:07:59.687 04:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:07:59.687 04:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:59.687 04:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:59.687 04:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:59.687 04:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.687 04:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.687 04:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:59.687 04:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.687 04:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:59.687 04:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:59.687 04:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:59.687 04:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:59.687 04:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:59.687 04:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.687 04:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.687 04:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.948 04:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:59.948 04:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:59.948 04:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq 
-r '.[] | .uuid' 00:07:59.948 04:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:59.948 04:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.948 04:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.948 [2024-11-21 04:05:59.688543] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:59.948 04:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.948 04:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=61380cc9-e4e7-4f54-b6ab-bf55b24bd15e 00:07:59.948 04:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 61380cc9-e4e7-4f54-b6ab-bf55b24bd15e ']' 00:07:59.948 04:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:59.948 04:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.948 04:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.948 [2024-11-21 04:05:59.720233] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:59.948 [2024-11-21 04:05:59.720303] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:59.948 [2024-11-21 04:05:59.720424] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:59.948 [2024-11-21 04:05:59.720533] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:59.948 [2024-11-21 04:05:59.720600] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:07:59.948 04:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.948 04:05:59 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.948 04:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:59.948 04:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.948 04:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.948 04:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.948 04:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:59.948 04:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:59.948 04:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:59.948 04:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:59.948 04:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.948 04:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.949 04:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.949 04:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:59.949 04:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:59.949 04:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.949 04:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.949 04:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.949 04:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:59.949 04:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.949 04:05:59 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:59.949 04:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:59.949 04:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.949 04:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:59.949 04:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:59.949 04:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:59.949 04:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:59.949 04:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:59.949 04:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:59.949 04:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:59.949 04:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:59.949 04:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:59.949 04:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.949 04:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.949 [2024-11-21 04:05:59.848120] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:59.949 [2024-11-21 04:05:59.850558] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:59.949 [2024-11-21 04:05:59.850691] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:59.949 [2024-11-21 04:05:59.850842] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:59.949 [2024-11-21 04:05:59.850904] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:59.949 [2024-11-21 04:05:59.850948] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:07:59.949 request: 00:07:59.949 { 00:07:59.949 "name": "raid_bdev1", 00:07:59.949 "raid_level": "concat", 00:07:59.949 "base_bdevs": [ 00:07:59.949 "malloc1", 00:07:59.949 "malloc2" 00:07:59.949 ], 00:07:59.949 "strip_size_kb": 64, 00:07:59.949 "superblock": false, 00:07:59.949 "method": "bdev_raid_create", 00:07:59.949 "req_id": 1 00:07:59.949 } 00:07:59.949 Got JSON-RPC error response 00:07:59.949 response: 00:07:59.949 { 00:07:59.949 "code": -17, 00:07:59.949 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:59.949 } 00:07:59.949 04:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:59.949 04:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:07:59.949 04:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:59.949 04:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:59.949 04:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:59.949 04:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.949 04:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:59.949 04:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.949 04:05:59 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@10 -- # set +x 00:07:59.949 04:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.949 04:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:59.949 04:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:59.949 04:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:59.949 04:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.949 04:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.949 [2024-11-21 04:05:59.911962] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:59.949 [2024-11-21 04:05:59.912061] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:59.949 [2024-11-21 04:05:59.912099] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:59.949 [2024-11-21 04:05:59.912127] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:59.949 [2024-11-21 04:05:59.914732] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:59.949 [2024-11-21 04:05:59.914802] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:59.949 [2024-11-21 04:05:59.914899] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:59.949 [2024-11-21 04:05:59.914973] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:59.949 pt1 00:07:59.949 04:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.949 04:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:07:59.949 04:05:59 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:59.949 04:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:59.949 04:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:59.949 04:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:59.949 04:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:00.209 04:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:00.209 04:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:00.209 04:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:00.209 04:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:00.209 04:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.209 04:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:00.209 04:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.209 04:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.209 04:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.209 04:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:00.209 "name": "raid_bdev1", 00:08:00.209 "uuid": "61380cc9-e4e7-4f54-b6ab-bf55b24bd15e", 00:08:00.209 "strip_size_kb": 64, 00:08:00.209 "state": "configuring", 00:08:00.209 "raid_level": "concat", 00:08:00.209 "superblock": true, 00:08:00.209 "num_base_bdevs": 2, 00:08:00.209 "num_base_bdevs_discovered": 1, 00:08:00.209 "num_base_bdevs_operational": 2, 00:08:00.209 "base_bdevs_list": [ 00:08:00.209 { 00:08:00.209 
"name": "pt1", 00:08:00.209 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:00.209 "is_configured": true, 00:08:00.209 "data_offset": 2048, 00:08:00.209 "data_size": 63488 00:08:00.209 }, 00:08:00.209 { 00:08:00.209 "name": null, 00:08:00.209 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:00.209 "is_configured": false, 00:08:00.209 "data_offset": 2048, 00:08:00.209 "data_size": 63488 00:08:00.209 } 00:08:00.209 ] 00:08:00.209 }' 00:08:00.209 04:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:00.209 04:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.469 04:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:00.469 04:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:00.469 04:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:00.469 04:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:00.469 04:06:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.469 04:06:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.469 [2024-11-21 04:06:00.375259] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:00.470 [2024-11-21 04:06:00.375378] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:00.470 [2024-11-21 04:06:00.375423] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:00.470 [2024-11-21 04:06:00.375456] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:00.470 [2024-11-21 04:06:00.375996] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:00.470 [2024-11-21 04:06:00.376063] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:00.470 [2024-11-21 04:06:00.376198] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:00.470 [2024-11-21 04:06:00.376271] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:00.470 [2024-11-21 04:06:00.376416] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:08:00.470 [2024-11-21 04:06:00.376455] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:00.470 [2024-11-21 04:06:00.376793] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:08:00.470 [2024-11-21 04:06:00.376966] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:08:00.470 [2024-11-21 04:06:00.377017] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:08:00.470 [2024-11-21 04:06:00.377182] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:00.470 pt2 00:08:00.470 04:06:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.470 04:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:00.470 04:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:00.470 04:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:00.470 04:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:00.470 04:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:00.470 04:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:00.470 04:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:00.470 
04:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:00.470 04:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:00.470 04:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:00.470 04:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:00.470 04:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:00.470 04:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.470 04:06:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.470 04:06:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.470 04:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:00.470 04:06:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.470 04:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:00.470 "name": "raid_bdev1", 00:08:00.470 "uuid": "61380cc9-e4e7-4f54-b6ab-bf55b24bd15e", 00:08:00.470 "strip_size_kb": 64, 00:08:00.470 "state": "online", 00:08:00.470 "raid_level": "concat", 00:08:00.470 "superblock": true, 00:08:00.470 "num_base_bdevs": 2, 00:08:00.470 "num_base_bdevs_discovered": 2, 00:08:00.470 "num_base_bdevs_operational": 2, 00:08:00.470 "base_bdevs_list": [ 00:08:00.470 { 00:08:00.470 "name": "pt1", 00:08:00.470 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:00.470 "is_configured": true, 00:08:00.470 "data_offset": 2048, 00:08:00.470 "data_size": 63488 00:08:00.470 }, 00:08:00.470 { 00:08:00.470 "name": "pt2", 00:08:00.470 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:00.470 "is_configured": true, 00:08:00.470 "data_offset": 2048, 00:08:00.470 "data_size": 63488 
00:08:00.470 } 00:08:00.470 ] 00:08:00.470 }' 00:08:00.470 04:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:00.470 04:06:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.040 04:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:01.040 04:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:01.040 04:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:01.040 04:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:01.040 04:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:01.040 04:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:01.040 04:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:01.040 04:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:01.040 04:06:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.040 04:06:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.040 [2024-11-21 04:06:00.834731] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:01.040 04:06:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.040 04:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:01.040 "name": "raid_bdev1", 00:08:01.040 "aliases": [ 00:08:01.040 "61380cc9-e4e7-4f54-b6ab-bf55b24bd15e" 00:08:01.040 ], 00:08:01.040 "product_name": "Raid Volume", 00:08:01.040 "block_size": 512, 00:08:01.040 "num_blocks": 126976, 00:08:01.040 "uuid": "61380cc9-e4e7-4f54-b6ab-bf55b24bd15e", 00:08:01.040 "assigned_rate_limits": { 00:08:01.040 
"rw_ios_per_sec": 0, 00:08:01.040 "rw_mbytes_per_sec": 0, 00:08:01.040 "r_mbytes_per_sec": 0, 00:08:01.040 "w_mbytes_per_sec": 0 00:08:01.040 }, 00:08:01.040 "claimed": false, 00:08:01.040 "zoned": false, 00:08:01.040 "supported_io_types": { 00:08:01.040 "read": true, 00:08:01.040 "write": true, 00:08:01.040 "unmap": true, 00:08:01.040 "flush": true, 00:08:01.040 "reset": true, 00:08:01.040 "nvme_admin": false, 00:08:01.040 "nvme_io": false, 00:08:01.040 "nvme_io_md": false, 00:08:01.040 "write_zeroes": true, 00:08:01.040 "zcopy": false, 00:08:01.040 "get_zone_info": false, 00:08:01.040 "zone_management": false, 00:08:01.040 "zone_append": false, 00:08:01.040 "compare": false, 00:08:01.040 "compare_and_write": false, 00:08:01.040 "abort": false, 00:08:01.040 "seek_hole": false, 00:08:01.040 "seek_data": false, 00:08:01.040 "copy": false, 00:08:01.040 "nvme_iov_md": false 00:08:01.040 }, 00:08:01.040 "memory_domains": [ 00:08:01.040 { 00:08:01.040 "dma_device_id": "system", 00:08:01.040 "dma_device_type": 1 00:08:01.040 }, 00:08:01.040 { 00:08:01.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:01.040 "dma_device_type": 2 00:08:01.040 }, 00:08:01.040 { 00:08:01.040 "dma_device_id": "system", 00:08:01.040 "dma_device_type": 1 00:08:01.040 }, 00:08:01.040 { 00:08:01.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:01.040 "dma_device_type": 2 00:08:01.040 } 00:08:01.040 ], 00:08:01.040 "driver_specific": { 00:08:01.040 "raid": { 00:08:01.040 "uuid": "61380cc9-e4e7-4f54-b6ab-bf55b24bd15e", 00:08:01.040 "strip_size_kb": 64, 00:08:01.040 "state": "online", 00:08:01.040 "raid_level": "concat", 00:08:01.040 "superblock": true, 00:08:01.040 "num_base_bdevs": 2, 00:08:01.040 "num_base_bdevs_discovered": 2, 00:08:01.040 "num_base_bdevs_operational": 2, 00:08:01.040 "base_bdevs_list": [ 00:08:01.040 { 00:08:01.040 "name": "pt1", 00:08:01.040 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:01.040 "is_configured": true, 00:08:01.040 "data_offset": 2048, 00:08:01.040 
"data_size": 63488 00:08:01.040 }, 00:08:01.040 { 00:08:01.040 "name": "pt2", 00:08:01.040 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:01.040 "is_configured": true, 00:08:01.040 "data_offset": 2048, 00:08:01.040 "data_size": 63488 00:08:01.040 } 00:08:01.040 ] 00:08:01.040 } 00:08:01.040 } 00:08:01.040 }' 00:08:01.040 04:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:01.040 04:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:01.040 pt2' 00:08:01.040 04:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:01.040 04:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:01.040 04:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:01.040 04:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:01.040 04:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:01.040 04:06:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.040 04:06:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.040 04:06:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.040 04:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:01.040 04:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:01.040 04:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:01.040 04:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 
00:08:01.040 04:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:01.040 04:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.040 04:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.301 04:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.301 04:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:01.301 04:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:01.301 04:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:01.301 04:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.301 04:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.301 04:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:01.301 [2024-11-21 04:06:01.058421] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:01.301 04:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.301 04:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 61380cc9-e4e7-4f54-b6ab-bf55b24bd15e '!=' 61380cc9-e4e7-4f54-b6ab-bf55b24bd15e ']' 00:08:01.301 04:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:08:01.301 04:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:01.301 04:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:01.301 04:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 73503 00:08:01.301 04:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 73503 ']' 
00:08:01.301 04:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 73503 00:08:01.301 04:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:01.301 04:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:01.301 04:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73503 00:08:01.301 killing process with pid 73503 00:08:01.301 04:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:01.301 04:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:01.301 04:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73503' 00:08:01.301 04:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 73503 00:08:01.301 [2024-11-21 04:06:01.125370] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:01.301 [2024-11-21 04:06:01.125490] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:01.301 04:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 73503 00:08:01.301 [2024-11-21 04:06:01.125551] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:01.301 [2024-11-21 04:06:01.125561] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:08:01.301 [2024-11-21 04:06:01.168170] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:01.561 04:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:01.561 00:08:01.561 real 0m3.485s 00:08:01.561 user 0m5.200s 00:08:01.561 sys 0m0.813s 00:08:01.561 ************************************ 00:08:01.561 END TEST raid_superblock_test 00:08:01.561 
************************************ 00:08:01.561 04:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:01.561 04:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.821 04:06:01 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:08:01.821 04:06:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:01.821 04:06:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:01.821 04:06:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:01.821 ************************************ 00:08:01.821 START TEST raid_read_error_test 00:08:01.821 ************************************ 00:08:01.821 04:06:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read 00:08:01.821 04:06:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:01.821 04:06:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:01.821 04:06:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:01.821 04:06:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:01.821 04:06:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:01.821 04:06:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:01.821 04:06:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:01.821 04:06:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:01.821 04:06:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:01.821 04:06:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:01.821 04:06:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= 
num_base_bdevs )) 00:08:01.821 04:06:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:01.821 04:06:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:01.821 04:06:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:01.821 04:06:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:01.821 04:06:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:01.821 04:06:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:01.821 04:06:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:01.821 04:06:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:01.821 04:06:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:01.821 04:06:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:01.821 04:06:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:01.821 04:06:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.zK73JBKEtH 00:08:01.821 04:06:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73709 00:08:01.821 04:06:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:01.821 04:06:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73709 00:08:01.821 04:06:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 73709 ']' 00:08:01.821 04:06:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:01.821 04:06:01 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:08:01.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:01.821 04:06:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:01.821 04:06:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:01.821 04:06:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.821 [2024-11-21 04:06:01.682705] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:08:01.821 [2024-11-21 04:06:01.682869] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73709 ] 00:08:02.081 [2024-11-21 04:06:01.839946] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.081 [2024-11-21 04:06:01.880527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.081 [2024-11-21 04:06:01.958923] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:02.081 [2024-11-21 04:06:01.958983] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:02.651 04:06:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:02.651 04:06:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:02.651 04:06:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:02.651 04:06:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:02.651 04:06:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.651 04:06:02 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.651 BaseBdev1_malloc 00:08:02.651 04:06:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.651 04:06:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:02.651 04:06:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.651 04:06:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.651 true 00:08:02.651 04:06:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.651 04:06:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:02.651 04:06:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.651 04:06:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.651 [2024-11-21 04:06:02.590103] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:02.651 [2024-11-21 04:06:02.590211] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:02.651 [2024-11-21 04:06:02.590266] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:08:02.651 [2024-11-21 04:06:02.590308] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:02.651 [2024-11-21 04:06:02.592808] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:02.651 [2024-11-21 04:06:02.592882] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:02.651 BaseBdev1 00:08:02.651 04:06:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.651 04:06:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 
00:08:02.651 04:06:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:02.651 04:06:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.651 04:06:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.651 BaseBdev2_malloc 00:08:02.651 04:06:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.651 04:06:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:02.651 04:06:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.651 04:06:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.912 true 00:08:02.912 04:06:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.912 04:06:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:02.912 04:06:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.912 04:06:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.912 [2024-11-21 04:06:02.636827] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:02.912 [2024-11-21 04:06:02.636882] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:02.912 [2024-11-21 04:06:02.636903] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:08:02.912 [2024-11-21 04:06:02.636922] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:02.912 [2024-11-21 04:06:02.639362] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:02.912 [2024-11-21 04:06:02.639404] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: BaseBdev2 00:08:02.912 BaseBdev2 00:08:02.912 04:06:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.912 04:06:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:02.912 04:06:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.912 04:06:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.912 [2024-11-21 04:06:02.648868] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:02.912 [2024-11-21 04:06:02.651082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:02.912 [2024-11-21 04:06:02.651356] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:08:02.912 [2024-11-21 04:06:02.651411] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:02.912 [2024-11-21 04:06:02.651760] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:08:02.912 [2024-11-21 04:06:02.651949] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:08:02.912 [2024-11-21 04:06:02.651997] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:08:02.912 [2024-11-21 04:06:02.652229] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:02.912 04:06:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.912 04:06:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:02.912 04:06:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:02.912 04:06:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:08:02.912 04:06:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:02.912 04:06:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:02.912 04:06:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:02.912 04:06:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:02.912 04:06:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:02.912 04:06:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:02.912 04:06:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:02.912 04:06:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.912 04:06:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:02.912 04:06:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.912 04:06:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.912 04:06:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.912 04:06:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:02.912 "name": "raid_bdev1", 00:08:02.912 "uuid": "8d4c33d4-66fe-436d-bc44-04efbc7feafd", 00:08:02.912 "strip_size_kb": 64, 00:08:02.912 "state": "online", 00:08:02.912 "raid_level": "concat", 00:08:02.912 "superblock": true, 00:08:02.912 "num_base_bdevs": 2, 00:08:02.912 "num_base_bdevs_discovered": 2, 00:08:02.912 "num_base_bdevs_operational": 2, 00:08:02.912 "base_bdevs_list": [ 00:08:02.912 { 00:08:02.912 "name": "BaseBdev1", 00:08:02.912 "uuid": "b40afca1-a27c-5a30-bac4-190116e714ff", 00:08:02.912 "is_configured": true, 00:08:02.912 "data_offset": 2048, 00:08:02.912 
"data_size": 63488 00:08:02.912 }, 00:08:02.912 { 00:08:02.912 "name": "BaseBdev2", 00:08:02.912 "uuid": "55e8fa4c-c547-5a34-b40e-0a4e72e7ff0a", 00:08:02.912 "is_configured": true, 00:08:02.912 "data_offset": 2048, 00:08:02.912 "data_size": 63488 00:08:02.912 } 00:08:02.912 ] 00:08:02.912 }' 00:08:02.912 04:06:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:02.912 04:06:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.172 04:06:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:03.172 04:06:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:03.433 [2024-11-21 04:06:03.156486] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:08:04.374 04:06:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:04.374 04:06:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.374 04:06:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.374 04:06:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.374 04:06:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:04.374 04:06:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:04.374 04:06:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:04.374 04:06:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:04.374 04:06:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:04.374 04:06:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:08:04.374 04:06:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:04.374 04:06:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:04.374 04:06:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:04.374 04:06:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:04.374 04:06:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:04.374 04:06:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:04.374 04:06:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:04.374 04:06:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.374 04:06:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:04.374 04:06:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.374 04:06:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.374 04:06:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.374 04:06:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:04.374 "name": "raid_bdev1", 00:08:04.374 "uuid": "8d4c33d4-66fe-436d-bc44-04efbc7feafd", 00:08:04.374 "strip_size_kb": 64, 00:08:04.374 "state": "online", 00:08:04.374 "raid_level": "concat", 00:08:04.374 "superblock": true, 00:08:04.374 "num_base_bdevs": 2, 00:08:04.374 "num_base_bdevs_discovered": 2, 00:08:04.374 "num_base_bdevs_operational": 2, 00:08:04.374 "base_bdevs_list": [ 00:08:04.374 { 00:08:04.374 "name": "BaseBdev1", 00:08:04.374 "uuid": "b40afca1-a27c-5a30-bac4-190116e714ff", 00:08:04.374 "is_configured": true, 00:08:04.374 "data_offset": 2048, 00:08:04.374 
"data_size": 63488 00:08:04.374 }, 00:08:04.374 { 00:08:04.374 "name": "BaseBdev2", 00:08:04.374 "uuid": "55e8fa4c-c547-5a34-b40e-0a4e72e7ff0a", 00:08:04.374 "is_configured": true, 00:08:04.374 "data_offset": 2048, 00:08:04.374 "data_size": 63488 00:08:04.374 } 00:08:04.374 ] 00:08:04.374 }' 00:08:04.374 04:06:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:04.374 04:06:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.635 04:06:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:04.635 04:06:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.635 04:06:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.635 [2024-11-21 04:06:04.537800] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:04.635 [2024-11-21 04:06:04.537881] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:04.635 [2024-11-21 04:06:04.540540] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:04.635 [2024-11-21 04:06:04.540622] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:04.635 [2024-11-21 04:06:04.540707] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:04.635 [2024-11-21 04:06:04.540753] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:08:04.635 04:06:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.635 { 00:08:04.635 "results": [ 00:08:04.635 { 00:08:04.635 "job": "raid_bdev1", 00:08:04.635 "core_mask": "0x1", 00:08:04.635 "workload": "randrw", 00:08:04.635 "percentage": 50, 00:08:04.635 "status": "finished", 00:08:04.635 "queue_depth": 1, 00:08:04.635 "io_size": 131072, 00:08:04.635 
"runtime": 1.381921, 00:08:04.635 "iops": 14526.155981420066, 00:08:04.635 "mibps": 1815.7694976775083, 00:08:04.635 "io_failed": 1, 00:08:04.635 "io_timeout": 0, 00:08:04.635 "avg_latency_us": 96.34038712905208, 00:08:04.635 "min_latency_us": 25.152838427947597, 00:08:04.635 "max_latency_us": 1430.9170305676855 00:08:04.635 } 00:08:04.635 ], 00:08:04.635 "core_count": 1 00:08:04.635 } 00:08:04.635 04:06:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73709 00:08:04.635 04:06:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 73709 ']' 00:08:04.635 04:06:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 73709 00:08:04.635 04:06:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:04.635 04:06:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:04.635 04:06:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73709 00:08:04.635 04:06:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:04.635 04:06:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:04.635 04:06:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73709' 00:08:04.635 killing process with pid 73709 00:08:04.635 04:06:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 73709 00:08:04.635 [2024-11-21 04:06:04.590323] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:04.636 04:06:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 73709 00:08:04.896 [2024-11-21 04:06:04.620926] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:05.156 04:06:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:05.156 04:06:04 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.zK73JBKEtH 00:08:05.156 04:06:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:05.156 04:06:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:08:05.156 ************************************ 00:08:05.156 END TEST raid_read_error_test 00:08:05.156 ************************************ 00:08:05.156 04:06:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:05.156 04:06:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:05.156 04:06:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:05.156 04:06:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:08:05.156 00:08:05.156 real 0m3.386s 00:08:05.156 user 0m4.162s 00:08:05.156 sys 0m0.663s 00:08:05.156 04:06:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:05.156 04:06:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.156 04:06:05 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:08:05.156 04:06:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:05.156 04:06:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:05.156 04:06:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:05.156 ************************************ 00:08:05.156 START TEST raid_write_error_test 00:08:05.156 ************************************ 00:08:05.156 04:06:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write 00:08:05.156 04:06:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:05.156 04:06:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 
00:08:05.156 04:06:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:05.156 04:06:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:05.156 04:06:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:05.156 04:06:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:05.156 04:06:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:05.156 04:06:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:05.156 04:06:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:05.156 04:06:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:05.156 04:06:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:05.156 04:06:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:05.156 04:06:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:05.156 04:06:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:05.157 04:06:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:05.157 04:06:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:05.157 04:06:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:05.157 04:06:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:05.157 04:06:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:05.157 04:06:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:05.157 04:06:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:05.157 
04:06:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:05.157 04:06:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.DOa0hJsOxY 00:08:05.157 04:06:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73838 00:08:05.157 04:06:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:05.157 04:06:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73838 00:08:05.157 04:06:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 73838 ']' 00:08:05.157 04:06:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:05.157 04:06:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:05.157 04:06:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:05.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:05.157 04:06:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:05.157 04:06:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.417 [2024-11-21 04:06:05.138314] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:08:05.417 [2024-11-21 04:06:05.138554] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73838 ] 00:08:05.417 [2024-11-21 04:06:05.293565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.417 [2024-11-21 04:06:05.340129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.677 [2024-11-21 04:06:05.419067] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:05.677 [2024-11-21 04:06:05.419112] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:06.246 04:06:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:06.246 04:06:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:06.246 04:06:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:06.246 04:06:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:06.246 04:06:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.247 04:06:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.247 BaseBdev1_malloc 00:08:06.247 04:06:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.247 04:06:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:06.247 04:06:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.247 04:06:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.247 true 00:08:06.247 04:06:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:08:06.247 04:06:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:06.247 04:06:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.247 04:06:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.247 [2024-11-21 04:06:06.096051] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:06.247 [2024-11-21 04:06:06.096187] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:06.247 [2024-11-21 04:06:06.096256] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:08:06.247 [2024-11-21 04:06:06.096318] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:06.247 [2024-11-21 04:06:06.098949] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:06.247 [2024-11-21 04:06:06.099039] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:06.247 BaseBdev1 00:08:06.247 04:06:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.247 04:06:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:06.247 04:06:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:06.247 04:06:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.247 04:06:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.247 BaseBdev2_malloc 00:08:06.247 04:06:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.247 04:06:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:06.247 04:06:06 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.247 04:06:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.247 true 00:08:06.247 04:06:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.247 04:06:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:06.247 04:06:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.247 04:06:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.247 [2024-11-21 04:06:06.143310] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:06.247 [2024-11-21 04:06:06.143436] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:06.247 [2024-11-21 04:06:06.143478] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:08:06.247 [2024-11-21 04:06:06.143524] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:06.247 [2024-11-21 04:06:06.146144] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:06.247 [2024-11-21 04:06:06.146237] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:06.247 BaseBdev2 00:08:06.247 04:06:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.247 04:06:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:06.247 04:06:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.247 04:06:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.247 [2024-11-21 04:06:06.155361] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:08:06.247 [2024-11-21 04:06:06.157543] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:06.247 [2024-11-21 04:06:06.157762] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:08:06.247 [2024-11-21 04:06:06.157777] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:06.247 [2024-11-21 04:06:06.158107] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:08:06.247 [2024-11-21 04:06:06.158280] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:08:06.247 [2024-11-21 04:06:06.158295] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:08:06.247 [2024-11-21 04:06:06.158455] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:06.247 04:06:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.247 04:06:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:06.247 04:06:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:06.247 04:06:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:06.247 04:06:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:06.247 04:06:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:06.247 04:06:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:06.247 04:06:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:06.247 04:06:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:06.247 04:06:06 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:06.247 04:06:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:06.247 04:06:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.247 04:06:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:06.247 04:06:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.247 04:06:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.247 04:06:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.247 04:06:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:06.247 "name": "raid_bdev1", 00:08:06.247 "uuid": "38e083a0-6ab0-49f0-b794-4245c7935c57", 00:08:06.247 "strip_size_kb": 64, 00:08:06.247 "state": "online", 00:08:06.247 "raid_level": "concat", 00:08:06.247 "superblock": true, 00:08:06.247 "num_base_bdevs": 2, 00:08:06.247 "num_base_bdevs_discovered": 2, 00:08:06.247 "num_base_bdevs_operational": 2, 00:08:06.247 "base_bdevs_list": [ 00:08:06.247 { 00:08:06.247 "name": "BaseBdev1", 00:08:06.247 "uuid": "a24bc40b-a548-521e-9fea-739b4f988492", 00:08:06.247 "is_configured": true, 00:08:06.247 "data_offset": 2048, 00:08:06.247 "data_size": 63488 00:08:06.247 }, 00:08:06.247 { 00:08:06.247 "name": "BaseBdev2", 00:08:06.247 "uuid": "7ff3ba51-44a2-5769-944c-5c5d2a3e583a", 00:08:06.247 "is_configured": true, 00:08:06.247 "data_offset": 2048, 00:08:06.247 "data_size": 63488 00:08:06.247 } 00:08:06.247 ] 00:08:06.247 }' 00:08:06.247 04:06:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:06.247 04:06:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.817 04:06:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- 
# /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:06.817 04:06:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:06.817 [2024-11-21 04:06:06.726899] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:08:07.756 04:06:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:07.756 04:06:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.756 04:06:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.756 04:06:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.756 04:06:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:07.756 04:06:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:07.756 04:06:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:07.756 04:06:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:07.756 04:06:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:07.756 04:06:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:07.756 04:06:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:07.756 04:06:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:07.756 04:06:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:07.756 04:06:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:07.756 04:06:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:08:07.756 04:06:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:07.756 04:06:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:07.756 04:06:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.756 04:06:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:07.756 04:06:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.756 04:06:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.756 04:06:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.756 04:06:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:07.756 "name": "raid_bdev1", 00:08:07.756 "uuid": "38e083a0-6ab0-49f0-b794-4245c7935c57", 00:08:07.756 "strip_size_kb": 64, 00:08:07.756 "state": "online", 00:08:07.756 "raid_level": "concat", 00:08:07.756 "superblock": true, 00:08:07.756 "num_base_bdevs": 2, 00:08:07.756 "num_base_bdevs_discovered": 2, 00:08:07.756 "num_base_bdevs_operational": 2, 00:08:07.756 "base_bdevs_list": [ 00:08:07.756 { 00:08:07.756 "name": "BaseBdev1", 00:08:07.756 "uuid": "a24bc40b-a548-521e-9fea-739b4f988492", 00:08:07.756 "is_configured": true, 00:08:07.756 "data_offset": 2048, 00:08:07.756 "data_size": 63488 00:08:07.756 }, 00:08:07.756 { 00:08:07.756 "name": "BaseBdev2", 00:08:07.756 "uuid": "7ff3ba51-44a2-5769-944c-5c5d2a3e583a", 00:08:07.756 "is_configured": true, 00:08:07.756 "data_offset": 2048, 00:08:07.756 "data_size": 63488 00:08:07.756 } 00:08:07.756 ] 00:08:07.756 }' 00:08:07.756 04:06:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:07.756 04:06:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.326 04:06:08 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:08.326 04:06:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.326 04:06:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.326 [2024-11-21 04:06:08.103618] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:08.326 [2024-11-21 04:06:08.103714] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:08.326 [2024-11-21 04:06:08.106322] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:08.326 [2024-11-21 04:06:08.106411] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:08.326 [2024-11-21 04:06:08.106471] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:08.326 [2024-11-21 04:06:08.106510] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:08:08.326 { 00:08:08.326 "results": [ 00:08:08.326 { 00:08:08.326 "job": "raid_bdev1", 00:08:08.326 "core_mask": "0x1", 00:08:08.326 "workload": "randrw", 00:08:08.326 "percentage": 50, 00:08:08.326 "status": "finished", 00:08:08.326 "queue_depth": 1, 00:08:08.326 "io_size": 131072, 00:08:08.326 "runtime": 1.377161, 00:08:08.326 "iops": 14876.256298283208, 00:08:08.326 "mibps": 1859.532037285401, 00:08:08.326 "io_failed": 1, 00:08:08.326 "io_timeout": 0, 00:08:08.326 "avg_latency_us": 94.17532520900507, 00:08:08.326 "min_latency_us": 24.705676855895195, 00:08:08.326 "max_latency_us": 1452.380786026201 00:08:08.326 } 00:08:08.326 ], 00:08:08.326 "core_count": 1 00:08:08.326 } 00:08:08.326 04:06:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.326 04:06:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73838 00:08:08.326 04:06:08 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 73838 ']' 00:08:08.326 04:06:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 73838 00:08:08.326 04:06:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:08.326 04:06:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:08.326 04:06:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73838 00:08:08.326 04:06:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:08.326 04:06:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:08.326 04:06:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73838' 00:08:08.326 killing process with pid 73838 00:08:08.326 04:06:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 73838 00:08:08.326 [2024-11-21 04:06:08.155550] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:08.326 04:06:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 73838 00:08:08.326 [2024-11-21 04:06:08.185241] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:08.585 04:06:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.DOa0hJsOxY 00:08:08.586 04:06:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:08.586 04:06:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:08.586 04:06:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:08:08.586 04:06:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:08.586 04:06:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:08.586 04:06:08 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:08.586 ************************************ 00:08:08.586 END TEST raid_write_error_test 00:08:08.586 ************************************ 00:08:08.586 04:06:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:08:08.586 00:08:08.586 real 0m3.488s 00:08:08.586 user 0m4.395s 00:08:08.586 sys 0m0.638s 00:08:08.586 04:06:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:08.586 04:06:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.857 04:06:08 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:08.857 04:06:08 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:08:08.857 04:06:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:08.857 04:06:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:08.857 04:06:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:08.857 ************************************ 00:08:08.857 START TEST raid_state_function_test 00:08:08.857 ************************************ 00:08:08.857 04:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false 00:08:08.857 04:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:08.857 04:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:08.857 04:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:08.857 04:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:08.857 04:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:08.857 04:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i <= num_base_bdevs )) 00:08:08.857 04:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:08.857 04:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:08.857 04:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:08.857 04:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:08.857 04:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:08.857 04:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:08.857 04:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:08.857 04:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:08.857 04:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:08.857 04:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:08.857 04:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:08.857 04:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:08.857 04:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:08.857 04:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:08.857 04:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:08.857 Process raid pid: 73965 00:08:08.857 04:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:08.857 04:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73965 00:08:08.857 04:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:08.857 04:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73965' 00:08:08.857 04:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73965 00:08:08.857 04:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 73965 ']' 00:08:08.857 04:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:08.857 04:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:08.857 04:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:08.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:08.857 04:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:08.857 04:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.857 [2024-11-21 04:06:08.684866] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:08:08.857 [2024-11-21 04:06:08.685093] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:09.118 [2024-11-21 04:06:08.826256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.118 [2024-11-21 04:06:08.864907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.118 [2024-11-21 04:06:08.940976] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:09.118 [2024-11-21 04:06:08.941111] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:09.687 04:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:09.687 04:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:09.687 04:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:09.687 04:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.687 04:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.687 [2024-11-21 04:06:09.556439] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:09.687 [2024-11-21 04:06:09.556548] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:09.687 [2024-11-21 04:06:09.556578] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:09.687 [2024-11-21 04:06:09.556602] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:09.687 04:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.687 04:06:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:09.687 04:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:09.687 04:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:09.687 04:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:09.687 04:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:09.687 04:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:09.687 04:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:09.687 04:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:09.687 04:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:09.687 04:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:09.687 04:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.687 04:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.687 04:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.687 04:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:09.687 04:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.687 04:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:09.687 "name": "Existed_Raid", 00:08:09.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.687 "strip_size_kb": 0, 00:08:09.687 "state": "configuring", 00:08:09.687 
"raid_level": "raid1", 00:08:09.687 "superblock": false, 00:08:09.687 "num_base_bdevs": 2, 00:08:09.687 "num_base_bdevs_discovered": 0, 00:08:09.687 "num_base_bdevs_operational": 2, 00:08:09.687 "base_bdevs_list": [ 00:08:09.687 { 00:08:09.687 "name": "BaseBdev1", 00:08:09.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.687 "is_configured": false, 00:08:09.687 "data_offset": 0, 00:08:09.687 "data_size": 0 00:08:09.687 }, 00:08:09.687 { 00:08:09.687 "name": "BaseBdev2", 00:08:09.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.687 "is_configured": false, 00:08:09.687 "data_offset": 0, 00:08:09.687 "data_size": 0 00:08:09.687 } 00:08:09.687 ] 00:08:09.687 }' 00:08:09.687 04:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:09.687 04:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.255 04:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:10.255 04:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.255 04:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.255 [2024-11-21 04:06:09.975683] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:10.255 [2024-11-21 04:06:09.975776] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:08:10.255 04:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.255 04:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:10.255 04:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.255 04:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:10.255 [2024-11-21 04:06:09.987642] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:10.255 [2024-11-21 04:06:09.987726] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:10.255 [2024-11-21 04:06:09.987754] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:10.255 [2024-11-21 04:06:09.987790] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:10.255 04:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.255 04:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:10.255 04:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.255 04:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.255 [2024-11-21 04:06:10.014881] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:10.255 BaseBdev1 00:08:10.255 04:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.255 04:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:10.255 04:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:10.255 04:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:10.255 04:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:10.255 04:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:10.255 04:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:10.255 04:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:08:10.255 04:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.255 04:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.255 04:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.255 04:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:10.255 04:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.255 04:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.255 [ 00:08:10.255 { 00:08:10.255 "name": "BaseBdev1", 00:08:10.255 "aliases": [ 00:08:10.255 "ef1980b1-6ac7-40d4-820e-a4a89191fef2" 00:08:10.255 ], 00:08:10.255 "product_name": "Malloc disk", 00:08:10.255 "block_size": 512, 00:08:10.255 "num_blocks": 65536, 00:08:10.255 "uuid": "ef1980b1-6ac7-40d4-820e-a4a89191fef2", 00:08:10.255 "assigned_rate_limits": { 00:08:10.255 "rw_ios_per_sec": 0, 00:08:10.255 "rw_mbytes_per_sec": 0, 00:08:10.255 "r_mbytes_per_sec": 0, 00:08:10.255 "w_mbytes_per_sec": 0 00:08:10.255 }, 00:08:10.255 "claimed": true, 00:08:10.255 "claim_type": "exclusive_write", 00:08:10.255 "zoned": false, 00:08:10.255 "supported_io_types": { 00:08:10.255 "read": true, 00:08:10.255 "write": true, 00:08:10.255 "unmap": true, 00:08:10.255 "flush": true, 00:08:10.255 "reset": true, 00:08:10.255 "nvme_admin": false, 00:08:10.255 "nvme_io": false, 00:08:10.255 "nvme_io_md": false, 00:08:10.255 "write_zeroes": true, 00:08:10.255 "zcopy": true, 00:08:10.255 "get_zone_info": false, 00:08:10.255 "zone_management": false, 00:08:10.255 "zone_append": false, 00:08:10.255 "compare": false, 00:08:10.255 "compare_and_write": false, 00:08:10.255 "abort": true, 00:08:10.255 "seek_hole": false, 00:08:10.255 "seek_data": false, 00:08:10.255 "copy": true, 00:08:10.255 "nvme_iov_md": 
false 00:08:10.255 }, 00:08:10.255 "memory_domains": [ 00:08:10.256 { 00:08:10.256 "dma_device_id": "system", 00:08:10.256 "dma_device_type": 1 00:08:10.256 }, 00:08:10.256 { 00:08:10.256 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:10.256 "dma_device_type": 2 00:08:10.256 } 00:08:10.256 ], 00:08:10.256 "driver_specific": {} 00:08:10.256 } 00:08:10.256 ] 00:08:10.256 04:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.256 04:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:10.256 04:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:10.256 04:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:10.256 04:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:10.256 04:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:10.256 04:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:10.256 04:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:10.256 04:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:10.256 04:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:10.256 04:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:10.256 04:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:10.256 04:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.256 04:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:10.256 
04:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.256 04:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.256 04:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.256 04:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:10.256 "name": "Existed_Raid", 00:08:10.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.256 "strip_size_kb": 0, 00:08:10.256 "state": "configuring", 00:08:10.256 "raid_level": "raid1", 00:08:10.256 "superblock": false, 00:08:10.256 "num_base_bdevs": 2, 00:08:10.256 "num_base_bdevs_discovered": 1, 00:08:10.256 "num_base_bdevs_operational": 2, 00:08:10.256 "base_bdevs_list": [ 00:08:10.256 { 00:08:10.256 "name": "BaseBdev1", 00:08:10.256 "uuid": "ef1980b1-6ac7-40d4-820e-a4a89191fef2", 00:08:10.256 "is_configured": true, 00:08:10.256 "data_offset": 0, 00:08:10.256 "data_size": 65536 00:08:10.256 }, 00:08:10.256 { 00:08:10.256 "name": "BaseBdev2", 00:08:10.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.256 "is_configured": false, 00:08:10.256 "data_offset": 0, 00:08:10.256 "data_size": 0 00:08:10.256 } 00:08:10.256 ] 00:08:10.256 }' 00:08:10.256 04:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:10.256 04:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.515 04:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:10.515 04:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.515 04:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.515 [2024-11-21 04:06:10.478175] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:10.516 [2024-11-21 04:06:10.478297] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:08:10.516 04:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.516 04:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:10.516 04:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.516 04:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.774 [2024-11-21 04:06:10.490165] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:10.775 [2024-11-21 04:06:10.492401] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:10.775 [2024-11-21 04:06:10.492478] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:10.775 04:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.775 04:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:10.775 04:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:10.775 04:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:10.775 04:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:10.775 04:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:10.775 04:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:10.775 04:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:10.775 04:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:08:10.775 04:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:10.775 04:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:10.775 04:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:10.775 04:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:10.775 04:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.775 04:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.775 04:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.775 04:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:10.775 04:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.775 04:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:10.775 "name": "Existed_Raid", 00:08:10.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.775 "strip_size_kb": 0, 00:08:10.775 "state": "configuring", 00:08:10.775 "raid_level": "raid1", 00:08:10.775 "superblock": false, 00:08:10.775 "num_base_bdevs": 2, 00:08:10.775 "num_base_bdevs_discovered": 1, 00:08:10.775 "num_base_bdevs_operational": 2, 00:08:10.775 "base_bdevs_list": [ 00:08:10.775 { 00:08:10.775 "name": "BaseBdev1", 00:08:10.775 "uuid": "ef1980b1-6ac7-40d4-820e-a4a89191fef2", 00:08:10.775 "is_configured": true, 00:08:10.775 "data_offset": 0, 00:08:10.775 "data_size": 65536 00:08:10.775 }, 00:08:10.775 { 00:08:10.775 "name": "BaseBdev2", 00:08:10.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.775 "is_configured": false, 00:08:10.775 "data_offset": 0, 00:08:10.775 "data_size": 0 00:08:10.775 } 00:08:10.775 ] 
00:08:10.775 }' 00:08:10.775 04:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:10.775 04:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.035 04:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:11.035 04:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.035 04:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.035 [2024-11-21 04:06:10.962927] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:11.035 BaseBdev2 00:08:11.035 [2024-11-21 04:06:10.963091] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:08:11.035 [2024-11-21 04:06:10.963104] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:11.035 [2024-11-21 04:06:10.963495] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:08:11.035 [2024-11-21 04:06:10.963660] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:08:11.035 [2024-11-21 04:06:10.963676] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:08:11.035 [2024-11-21 04:06:10.963919] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:11.035 04:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.035 04:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:11.035 04:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:11.035 04:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:11.035 04:06:10 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@905 -- # local i 00:08:11.035 04:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:11.036 04:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:11.036 04:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:11.036 04:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.036 04:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.036 04:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.036 04:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:11.036 04:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.036 04:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.036 [ 00:08:11.036 { 00:08:11.036 "name": "BaseBdev2", 00:08:11.036 "aliases": [ 00:08:11.036 "fd8294a7-f28d-4df3-bbec-8fb6ad6bcfae" 00:08:11.036 ], 00:08:11.036 "product_name": "Malloc disk", 00:08:11.036 "block_size": 512, 00:08:11.036 "num_blocks": 65536, 00:08:11.036 "uuid": "fd8294a7-f28d-4df3-bbec-8fb6ad6bcfae", 00:08:11.036 "assigned_rate_limits": { 00:08:11.036 "rw_ios_per_sec": 0, 00:08:11.036 "rw_mbytes_per_sec": 0, 00:08:11.036 "r_mbytes_per_sec": 0, 00:08:11.036 "w_mbytes_per_sec": 0 00:08:11.036 }, 00:08:11.036 "claimed": true, 00:08:11.036 "claim_type": "exclusive_write", 00:08:11.036 "zoned": false, 00:08:11.036 "supported_io_types": { 00:08:11.036 "read": true, 00:08:11.036 "write": true, 00:08:11.036 "unmap": true, 00:08:11.036 "flush": true, 00:08:11.036 "reset": true, 00:08:11.036 "nvme_admin": false, 00:08:11.036 "nvme_io": false, 00:08:11.036 "nvme_io_md": false, 00:08:11.036 "write_zeroes": 
true, 00:08:11.036 "zcopy": true, 00:08:11.036 "get_zone_info": false, 00:08:11.036 "zone_management": false, 00:08:11.036 "zone_append": false, 00:08:11.036 "compare": false, 00:08:11.036 "compare_and_write": false, 00:08:11.036 "abort": true, 00:08:11.036 "seek_hole": false, 00:08:11.036 "seek_data": false, 00:08:11.036 "copy": true, 00:08:11.036 "nvme_iov_md": false 00:08:11.036 }, 00:08:11.036 "memory_domains": [ 00:08:11.036 { 00:08:11.036 "dma_device_id": "system", 00:08:11.036 "dma_device_type": 1 00:08:11.036 }, 00:08:11.036 { 00:08:11.036 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.036 "dma_device_type": 2 00:08:11.036 } 00:08:11.036 ], 00:08:11.036 "driver_specific": {} 00:08:11.036 } 00:08:11.036 ] 00:08:11.036 04:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.036 04:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:11.036 04:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:11.036 04:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:11.036 04:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:11.036 04:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:11.036 04:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:11.036 04:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:11.036 04:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:11.036 04:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:11.036 04:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:11.036 04:06:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:11.036 04:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:11.036 04:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:11.036 04:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.036 04:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:11.036 04:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.036 04:06:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.297 04:06:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.297 04:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:11.297 "name": "Existed_Raid", 00:08:11.297 "uuid": "e6413c77-fdf6-4200-8a7e-6608c281b5f1", 00:08:11.297 "strip_size_kb": 0, 00:08:11.297 "state": "online", 00:08:11.297 "raid_level": "raid1", 00:08:11.297 "superblock": false, 00:08:11.297 "num_base_bdevs": 2, 00:08:11.297 "num_base_bdevs_discovered": 2, 00:08:11.297 "num_base_bdevs_operational": 2, 00:08:11.297 "base_bdevs_list": [ 00:08:11.297 { 00:08:11.297 "name": "BaseBdev1", 00:08:11.297 "uuid": "ef1980b1-6ac7-40d4-820e-a4a89191fef2", 00:08:11.297 "is_configured": true, 00:08:11.297 "data_offset": 0, 00:08:11.297 "data_size": 65536 00:08:11.297 }, 00:08:11.297 { 00:08:11.297 "name": "BaseBdev2", 00:08:11.297 "uuid": "fd8294a7-f28d-4df3-bbec-8fb6ad6bcfae", 00:08:11.297 "is_configured": true, 00:08:11.297 "data_offset": 0, 00:08:11.297 "data_size": 65536 00:08:11.297 } 00:08:11.297 ] 00:08:11.297 }' 00:08:11.297 04:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:11.297 04:06:11 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.557 04:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:11.557 04:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:11.557 04:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:11.557 04:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:11.557 04:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:11.557 04:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:11.557 04:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:11.557 04:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:11.557 04:06:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.557 04:06:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.557 [2024-11-21 04:06:11.438478] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:11.557 04:06:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.557 04:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:11.557 "name": "Existed_Raid", 00:08:11.557 "aliases": [ 00:08:11.557 "e6413c77-fdf6-4200-8a7e-6608c281b5f1" 00:08:11.557 ], 00:08:11.557 "product_name": "Raid Volume", 00:08:11.557 "block_size": 512, 00:08:11.557 "num_blocks": 65536, 00:08:11.557 "uuid": "e6413c77-fdf6-4200-8a7e-6608c281b5f1", 00:08:11.557 "assigned_rate_limits": { 00:08:11.557 "rw_ios_per_sec": 0, 00:08:11.557 "rw_mbytes_per_sec": 0, 00:08:11.557 "r_mbytes_per_sec": 0, 00:08:11.557 
"w_mbytes_per_sec": 0 00:08:11.557 }, 00:08:11.557 "claimed": false, 00:08:11.557 "zoned": false, 00:08:11.557 "supported_io_types": { 00:08:11.557 "read": true, 00:08:11.557 "write": true, 00:08:11.557 "unmap": false, 00:08:11.557 "flush": false, 00:08:11.557 "reset": true, 00:08:11.557 "nvme_admin": false, 00:08:11.557 "nvme_io": false, 00:08:11.557 "nvme_io_md": false, 00:08:11.557 "write_zeroes": true, 00:08:11.557 "zcopy": false, 00:08:11.557 "get_zone_info": false, 00:08:11.557 "zone_management": false, 00:08:11.557 "zone_append": false, 00:08:11.557 "compare": false, 00:08:11.557 "compare_and_write": false, 00:08:11.557 "abort": false, 00:08:11.557 "seek_hole": false, 00:08:11.557 "seek_data": false, 00:08:11.557 "copy": false, 00:08:11.557 "nvme_iov_md": false 00:08:11.557 }, 00:08:11.557 "memory_domains": [ 00:08:11.557 { 00:08:11.557 "dma_device_id": "system", 00:08:11.557 "dma_device_type": 1 00:08:11.557 }, 00:08:11.557 { 00:08:11.557 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.557 "dma_device_type": 2 00:08:11.557 }, 00:08:11.557 { 00:08:11.557 "dma_device_id": "system", 00:08:11.557 "dma_device_type": 1 00:08:11.557 }, 00:08:11.557 { 00:08:11.557 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.557 "dma_device_type": 2 00:08:11.557 } 00:08:11.557 ], 00:08:11.557 "driver_specific": { 00:08:11.557 "raid": { 00:08:11.557 "uuid": "e6413c77-fdf6-4200-8a7e-6608c281b5f1", 00:08:11.557 "strip_size_kb": 0, 00:08:11.557 "state": "online", 00:08:11.557 "raid_level": "raid1", 00:08:11.557 "superblock": false, 00:08:11.557 "num_base_bdevs": 2, 00:08:11.557 "num_base_bdevs_discovered": 2, 00:08:11.557 "num_base_bdevs_operational": 2, 00:08:11.557 "base_bdevs_list": [ 00:08:11.557 { 00:08:11.557 "name": "BaseBdev1", 00:08:11.557 "uuid": "ef1980b1-6ac7-40d4-820e-a4a89191fef2", 00:08:11.557 "is_configured": true, 00:08:11.557 "data_offset": 0, 00:08:11.557 "data_size": 65536 00:08:11.557 }, 00:08:11.557 { 00:08:11.557 "name": "BaseBdev2", 00:08:11.557 "uuid": 
"fd8294a7-f28d-4df3-bbec-8fb6ad6bcfae", 00:08:11.557 "is_configured": true, 00:08:11.557 "data_offset": 0, 00:08:11.557 "data_size": 65536 00:08:11.557 } 00:08:11.557 ] 00:08:11.557 } 00:08:11.557 } 00:08:11.557 }' 00:08:11.557 04:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:11.557 04:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:11.557 BaseBdev2' 00:08:11.557 04:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:11.817 04:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:11.817 04:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:11.817 04:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:11.817 04:06:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.817 04:06:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.817 04:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:11.817 04:06:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.817 04:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:11.817 04:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:11.817 04:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:11.817 04:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:08:11.817 04:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:11.817 04:06:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.817 04:06:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.817 04:06:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.817 04:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:11.817 04:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:11.818 04:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:11.818 04:06:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.818 04:06:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.818 [2024-11-21 04:06:11.657835] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:11.818 04:06:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.818 04:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:11.818 04:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:11.818 04:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:11.818 04:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:11.818 04:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:11.818 04:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:11.818 04:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:08:11.818 04:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:11.818 04:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:11.818 04:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:11.818 04:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:11.818 04:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:11.818 04:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:11.818 04:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:11.818 04:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:11.818 04:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.818 04:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:11.818 04:06:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.818 04:06:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.818 04:06:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.818 04:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:11.818 "name": "Existed_Raid", 00:08:11.818 "uuid": "e6413c77-fdf6-4200-8a7e-6608c281b5f1", 00:08:11.818 "strip_size_kb": 0, 00:08:11.818 "state": "online", 00:08:11.818 "raid_level": "raid1", 00:08:11.818 "superblock": false, 00:08:11.818 "num_base_bdevs": 2, 00:08:11.818 "num_base_bdevs_discovered": 1, 00:08:11.818 "num_base_bdevs_operational": 1, 00:08:11.818 "base_bdevs_list": [ 00:08:11.818 { 
00:08:11.818 "name": null, 00:08:11.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:11.818 "is_configured": false, 00:08:11.818 "data_offset": 0, 00:08:11.818 "data_size": 65536 00:08:11.818 }, 00:08:11.818 { 00:08:11.818 "name": "BaseBdev2", 00:08:11.818 "uuid": "fd8294a7-f28d-4df3-bbec-8fb6ad6bcfae", 00:08:11.818 "is_configured": true, 00:08:11.818 "data_offset": 0, 00:08:11.818 "data_size": 65536 00:08:11.818 } 00:08:11.818 ] 00:08:11.818 }' 00:08:11.818 04:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:11.818 04:06:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.386 04:06:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:12.386 04:06:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:12.387 04:06:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.387 04:06:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:12.387 04:06:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.387 04:06:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.387 04:06:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.387 04:06:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:12.387 04:06:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:12.387 04:06:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:12.387 04:06:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.387 04:06:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:12.387 [2024-11-21 04:06:12.169813] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:12.387 [2024-11-21 04:06:12.169970] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:12.387 [2024-11-21 04:06:12.190878] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:12.387 [2024-11-21 04:06:12.191035] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:12.387 [2024-11-21 04:06:12.191078] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:08:12.387 04:06:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.387 04:06:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:12.387 04:06:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:12.387 04:06:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:12.387 04:06:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.387 04:06:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.387 04:06:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.387 04:06:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.387 04:06:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:12.387 04:06:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:12.387 04:06:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:12.387 04:06:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 73965 00:08:12.387 04:06:12 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 73965 ']' 00:08:12.387 04:06:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 73965 00:08:12.387 04:06:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:12.387 04:06:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:12.387 04:06:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73965 00:08:12.387 04:06:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:12.387 04:06:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:12.387 04:06:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73965' 00:08:12.387 killing process with pid 73965 00:08:12.387 04:06:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 73965 00:08:12.387 [2024-11-21 04:06:12.270112] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:12.387 04:06:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 73965 00:08:12.387 [2024-11-21 04:06:12.271680] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:12.646 04:06:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:12.646 00:08:12.646 real 0m4.010s 00:08:12.646 user 0m6.109s 00:08:12.646 sys 0m0.910s 00:08:12.646 04:06:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:12.646 04:06:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.646 ************************************ 00:08:12.646 END TEST raid_state_function_test 00:08:12.646 ************************************ 00:08:12.906 04:06:12 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:08:12.906 04:06:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:12.906 04:06:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:12.906 04:06:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:12.906 ************************************ 00:08:12.906 START TEST raid_state_function_test_sb 00:08:12.906 ************************************ 00:08:12.906 04:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:08:12.906 04:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:12.906 04:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:12.906 04:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:12.906 04:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:12.906 04:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:12.907 04:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:12.907 04:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:12.907 04:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:12.907 04:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:12.907 04:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:12.907 04:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:12.907 04:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:12.907 04:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:12.907 04:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:12.907 04:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:12.907 04:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:12.907 04:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:12.907 04:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:12.907 04:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:12.907 04:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:12.907 04:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:12.907 04:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:12.907 04:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=74207 00:08:12.907 04:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:12.907 Process raid pid: 74207 00:08:12.907 04:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74207' 00:08:12.907 04:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 74207 00:08:12.907 04:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 74207 ']' 00:08:12.907 04:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:12.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:12.907 04:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:12.907 04:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:12.907 04:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:12.907 04:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.907 [2024-11-21 04:06:12.770857] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:08:12.907 [2024-11-21 04:06:12.771082] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:13.167 [2024-11-21 04:06:12.929100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.167 [2024-11-21 04:06:12.974670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.167 [2024-11-21 04:06:13.052076] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:13.167 [2024-11-21 04:06:13.052120] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:13.737 04:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:13.737 04:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:13.737 04:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:13.737 04:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.737 04:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:08:13.737 [2024-11-21 04:06:13.600155] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:13.737 [2024-11-21 04:06:13.600277] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:13.737 [2024-11-21 04:06:13.600312] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:13.737 [2024-11-21 04:06:13.600340] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:13.737 04:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.737 04:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:13.737 04:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:13.737 04:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:13.737 04:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:13.737 04:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:13.737 04:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:13.737 04:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:13.737 04:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:13.737 04:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:13.737 04:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:13.737 04:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.737 04:06:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:13.737 04:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.737 04:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.737 04:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.737 04:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:13.737 "name": "Existed_Raid", 00:08:13.737 "uuid": "2dedd096-932d-4d78-811e-236380cc4a1c", 00:08:13.737 "strip_size_kb": 0, 00:08:13.737 "state": "configuring", 00:08:13.737 "raid_level": "raid1", 00:08:13.737 "superblock": true, 00:08:13.737 "num_base_bdevs": 2, 00:08:13.737 "num_base_bdevs_discovered": 0, 00:08:13.737 "num_base_bdevs_operational": 2, 00:08:13.737 "base_bdevs_list": [ 00:08:13.737 { 00:08:13.737 "name": "BaseBdev1", 00:08:13.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:13.737 "is_configured": false, 00:08:13.737 "data_offset": 0, 00:08:13.737 "data_size": 0 00:08:13.737 }, 00:08:13.737 { 00:08:13.737 "name": "BaseBdev2", 00:08:13.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:13.737 "is_configured": false, 00:08:13.737 "data_offset": 0, 00:08:13.737 "data_size": 0 00:08:13.737 } 00:08:13.737 ] 00:08:13.737 }' 00:08:13.737 04:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:13.737 04:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.308 04:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:14.308 04:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.308 04:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.308 [2024-11-21 
04:06:14.055334] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:14.308 [2024-11-21 04:06:14.055441] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:08:14.308 04:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.308 04:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:14.308 04:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.308 04:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.308 [2024-11-21 04:06:14.067305] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:14.308 [2024-11-21 04:06:14.067393] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:14.308 [2024-11-21 04:06:14.067439] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:14.308 [2024-11-21 04:06:14.067480] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:14.308 04:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.308 04:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:14.308 04:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.308 04:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.308 [2024-11-21 04:06:14.095429] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:14.308 BaseBdev1 00:08:14.308 04:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:08:14.308 04:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:14.308 04:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:14.308 04:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:14.308 04:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:14.308 04:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:14.308 04:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:14.308 04:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:14.308 04:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.308 04:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.308 04:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.309 04:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:14.309 04:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.309 04:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.309 [ 00:08:14.309 { 00:08:14.309 "name": "BaseBdev1", 00:08:14.309 "aliases": [ 00:08:14.309 "0ff672ed-1e1b-41a3-909d-f499b63d6b42" 00:08:14.309 ], 00:08:14.309 "product_name": "Malloc disk", 00:08:14.309 "block_size": 512, 00:08:14.309 "num_blocks": 65536, 00:08:14.309 "uuid": "0ff672ed-1e1b-41a3-909d-f499b63d6b42", 00:08:14.309 "assigned_rate_limits": { 00:08:14.309 "rw_ios_per_sec": 0, 00:08:14.309 "rw_mbytes_per_sec": 0, 00:08:14.309 "r_mbytes_per_sec": 0, 00:08:14.309 
"w_mbytes_per_sec": 0 00:08:14.309 }, 00:08:14.309 "claimed": true, 00:08:14.309 "claim_type": "exclusive_write", 00:08:14.309 "zoned": false, 00:08:14.309 "supported_io_types": { 00:08:14.309 "read": true, 00:08:14.309 "write": true, 00:08:14.309 "unmap": true, 00:08:14.309 "flush": true, 00:08:14.309 "reset": true, 00:08:14.309 "nvme_admin": false, 00:08:14.309 "nvme_io": false, 00:08:14.309 "nvme_io_md": false, 00:08:14.309 "write_zeroes": true, 00:08:14.309 "zcopy": true, 00:08:14.309 "get_zone_info": false, 00:08:14.309 "zone_management": false, 00:08:14.309 "zone_append": false, 00:08:14.309 "compare": false, 00:08:14.309 "compare_and_write": false, 00:08:14.309 "abort": true, 00:08:14.309 "seek_hole": false, 00:08:14.309 "seek_data": false, 00:08:14.309 "copy": true, 00:08:14.309 "nvme_iov_md": false 00:08:14.309 }, 00:08:14.309 "memory_domains": [ 00:08:14.309 { 00:08:14.309 "dma_device_id": "system", 00:08:14.309 "dma_device_type": 1 00:08:14.309 }, 00:08:14.309 { 00:08:14.309 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:14.309 "dma_device_type": 2 00:08:14.309 } 00:08:14.309 ], 00:08:14.309 "driver_specific": {} 00:08:14.309 } 00:08:14.309 ] 00:08:14.309 04:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.309 04:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:14.309 04:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:14.309 04:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:14.309 04:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:14.309 04:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:14.309 04:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:08:14.309 04:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:14.309 04:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:14.309 04:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:14.309 04:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:14.309 04:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:14.309 04:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.309 04:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:14.309 04:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.309 04:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.309 04:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.309 04:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:14.309 "name": "Existed_Raid", 00:08:14.309 "uuid": "cc25889f-6502-40f9-a240-25a46c7485e2", 00:08:14.309 "strip_size_kb": 0, 00:08:14.309 "state": "configuring", 00:08:14.309 "raid_level": "raid1", 00:08:14.309 "superblock": true, 00:08:14.309 "num_base_bdevs": 2, 00:08:14.309 "num_base_bdevs_discovered": 1, 00:08:14.309 "num_base_bdevs_operational": 2, 00:08:14.309 "base_bdevs_list": [ 00:08:14.309 { 00:08:14.309 "name": "BaseBdev1", 00:08:14.309 "uuid": "0ff672ed-1e1b-41a3-909d-f499b63d6b42", 00:08:14.309 "is_configured": true, 00:08:14.309 "data_offset": 2048, 00:08:14.309 "data_size": 63488 00:08:14.309 }, 00:08:14.309 { 00:08:14.309 "name": "BaseBdev2", 00:08:14.309 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:14.309 "is_configured": false, 00:08:14.309 "data_offset": 0, 00:08:14.309 "data_size": 0 00:08:14.309 } 00:08:14.309 ] 00:08:14.309 }' 00:08:14.309 04:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:14.309 04:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.884 04:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:14.884 04:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.884 04:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.884 [2024-11-21 04:06:14.618608] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:14.884 [2024-11-21 04:06:14.618753] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:08:14.884 04:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.884 04:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:14.884 04:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.884 04:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.884 [2024-11-21 04:06:14.626629] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:14.884 [2024-11-21 04:06:14.628942] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:14.884 [2024-11-21 04:06:14.629028] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:14.884 04:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:08:14.884 04:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:14.884 04:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:14.884 04:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:14.884 04:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:14.884 04:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:14.884 04:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:14.884 04:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:14.884 04:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:14.884 04:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:14.884 04:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:14.884 04:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:14.884 04:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:14.884 04:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.884 04:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:14.884 04:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.884 04:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.884 04:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:08:14.884 04:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:14.884 "name": "Existed_Raid", 00:08:14.884 "uuid": "9c59c2f7-cfce-416f-84da-5d96238fe67d", 00:08:14.884 "strip_size_kb": 0, 00:08:14.884 "state": "configuring", 00:08:14.884 "raid_level": "raid1", 00:08:14.884 "superblock": true, 00:08:14.884 "num_base_bdevs": 2, 00:08:14.884 "num_base_bdevs_discovered": 1, 00:08:14.884 "num_base_bdevs_operational": 2, 00:08:14.884 "base_bdevs_list": [ 00:08:14.884 { 00:08:14.884 "name": "BaseBdev1", 00:08:14.884 "uuid": "0ff672ed-1e1b-41a3-909d-f499b63d6b42", 00:08:14.884 "is_configured": true, 00:08:14.884 "data_offset": 2048, 00:08:14.884 "data_size": 63488 00:08:14.884 }, 00:08:14.884 { 00:08:14.884 "name": "BaseBdev2", 00:08:14.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:14.884 "is_configured": false, 00:08:14.884 "data_offset": 0, 00:08:14.884 "data_size": 0 00:08:14.884 } 00:08:14.884 ] 00:08:14.884 }' 00:08:14.884 04:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:14.884 04:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.144 04:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:15.144 04:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.144 04:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.144 [2024-11-21 04:06:15.054644] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:15.144 [2024-11-21 04:06:15.054865] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:08:15.144 [2024-11-21 04:06:15.054881] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:15.144 [2024-11-21 04:06:15.055184] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:08:15.144 [2024-11-21 04:06:15.055399] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:08:15.144 [2024-11-21 04:06:15.055416] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:08:15.144 BaseBdev2 00:08:15.144 [2024-11-21 04:06:15.055545] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:15.144 04:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.144 04:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:15.144 04:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:15.144 04:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:15.144 04:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:15.144 04:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:15.144 04:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:15.144 04:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:15.144 04:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.144 04:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.144 04:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.144 04:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:15.144 04:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:15.144 04:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.144 [ 00:08:15.144 { 00:08:15.144 "name": "BaseBdev2", 00:08:15.144 "aliases": [ 00:08:15.144 "7ba6f3c6-1700-4b73-9cbb-1d4956840e9f" 00:08:15.144 ], 00:08:15.144 "product_name": "Malloc disk", 00:08:15.144 "block_size": 512, 00:08:15.144 "num_blocks": 65536, 00:08:15.144 "uuid": "7ba6f3c6-1700-4b73-9cbb-1d4956840e9f", 00:08:15.144 "assigned_rate_limits": { 00:08:15.144 "rw_ios_per_sec": 0, 00:08:15.144 "rw_mbytes_per_sec": 0, 00:08:15.144 "r_mbytes_per_sec": 0, 00:08:15.144 "w_mbytes_per_sec": 0 00:08:15.144 }, 00:08:15.144 "claimed": true, 00:08:15.144 "claim_type": "exclusive_write", 00:08:15.144 "zoned": false, 00:08:15.145 "supported_io_types": { 00:08:15.145 "read": true, 00:08:15.145 "write": true, 00:08:15.145 "unmap": true, 00:08:15.145 "flush": true, 00:08:15.145 "reset": true, 00:08:15.145 "nvme_admin": false, 00:08:15.145 "nvme_io": false, 00:08:15.145 "nvme_io_md": false, 00:08:15.145 "write_zeroes": true, 00:08:15.145 "zcopy": true, 00:08:15.145 "get_zone_info": false, 00:08:15.145 "zone_management": false, 00:08:15.145 "zone_append": false, 00:08:15.145 "compare": false, 00:08:15.145 "compare_and_write": false, 00:08:15.145 "abort": true, 00:08:15.145 "seek_hole": false, 00:08:15.145 "seek_data": false, 00:08:15.145 "copy": true, 00:08:15.145 "nvme_iov_md": false 00:08:15.145 }, 00:08:15.145 "memory_domains": [ 00:08:15.145 { 00:08:15.145 "dma_device_id": "system", 00:08:15.145 "dma_device_type": 1 00:08:15.145 }, 00:08:15.145 { 00:08:15.145 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:15.145 "dma_device_type": 2 00:08:15.145 } 00:08:15.145 ], 00:08:15.145 "driver_specific": {} 00:08:15.145 } 00:08:15.145 ] 00:08:15.145 04:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.145 04:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
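Both base bdevs in this trace are created with `bdev_malloc_create 32 512`, and their dumps report `"num_blocks": 65536`. The two are related by simple arithmetic — a sketch of the relationship, not SPDK code:

```shell
# A 32 MiB malloc bdev with 512-byte blocks, as in "bdev_malloc_create 32 512".
size_mb=32
block_size=512
num_blocks=$(( size_mb * 1024 * 1024 / block_size ))
echo "$num_blocks"    # 65536, matching the BaseBdev1/BaseBdev2 dumps
```

The raid1 volume's `num_blocks` of 63488 is smaller because, with `-s`, each base bdev reserves space for the superblock (`data_offset` 2048 in the dumps above).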
00:08:15.145 04:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:15.145 04:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:15.145 04:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:15.145 04:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:15.145 04:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:15.145 04:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:15.145 04:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:15.145 04:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:15.145 04:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:15.145 04:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.145 04:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:15.145 04:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:15.145 04:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.145 04:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:15.145 04:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.145 04:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.405 04:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.405 04:06:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.405 "name": "Existed_Raid", 00:08:15.405 "uuid": "9c59c2f7-cfce-416f-84da-5d96238fe67d", 00:08:15.405 "strip_size_kb": 0, 00:08:15.405 "state": "online", 00:08:15.405 "raid_level": "raid1", 00:08:15.405 "superblock": true, 00:08:15.405 "num_base_bdevs": 2, 00:08:15.405 "num_base_bdevs_discovered": 2, 00:08:15.405 "num_base_bdevs_operational": 2, 00:08:15.405 "base_bdevs_list": [ 00:08:15.405 { 00:08:15.405 "name": "BaseBdev1", 00:08:15.405 "uuid": "0ff672ed-1e1b-41a3-909d-f499b63d6b42", 00:08:15.405 "is_configured": true, 00:08:15.405 "data_offset": 2048, 00:08:15.405 "data_size": 63488 00:08:15.405 }, 00:08:15.405 { 00:08:15.405 "name": "BaseBdev2", 00:08:15.405 "uuid": "7ba6f3c6-1700-4b73-9cbb-1d4956840e9f", 00:08:15.405 "is_configured": true, 00:08:15.405 "data_offset": 2048, 00:08:15.405 "data_size": 63488 00:08:15.405 } 00:08:15.405 ] 00:08:15.405 }' 00:08:15.405 04:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.405 04:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.665 04:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:15.665 04:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:15.665 04:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:15.665 04:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:15.665 04:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:15.665 04:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:15.665 04:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:08:15.665 04:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:15.665 04:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.665 04:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.665 [2024-11-21 04:06:15.554154] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:15.665 04:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.665 04:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:15.665 "name": "Existed_Raid", 00:08:15.665 "aliases": [ 00:08:15.665 "9c59c2f7-cfce-416f-84da-5d96238fe67d" 00:08:15.665 ], 00:08:15.665 "product_name": "Raid Volume", 00:08:15.665 "block_size": 512, 00:08:15.665 "num_blocks": 63488, 00:08:15.665 "uuid": "9c59c2f7-cfce-416f-84da-5d96238fe67d", 00:08:15.665 "assigned_rate_limits": { 00:08:15.665 "rw_ios_per_sec": 0, 00:08:15.665 "rw_mbytes_per_sec": 0, 00:08:15.665 "r_mbytes_per_sec": 0, 00:08:15.665 "w_mbytes_per_sec": 0 00:08:15.665 }, 00:08:15.665 "claimed": false, 00:08:15.665 "zoned": false, 00:08:15.665 "supported_io_types": { 00:08:15.665 "read": true, 00:08:15.665 "write": true, 00:08:15.665 "unmap": false, 00:08:15.665 "flush": false, 00:08:15.665 "reset": true, 00:08:15.665 "nvme_admin": false, 00:08:15.665 "nvme_io": false, 00:08:15.665 "nvme_io_md": false, 00:08:15.665 "write_zeroes": true, 00:08:15.665 "zcopy": false, 00:08:15.665 "get_zone_info": false, 00:08:15.666 "zone_management": false, 00:08:15.666 "zone_append": false, 00:08:15.666 "compare": false, 00:08:15.666 "compare_and_write": false, 00:08:15.666 "abort": false, 00:08:15.666 "seek_hole": false, 00:08:15.666 "seek_data": false, 00:08:15.666 "copy": false, 00:08:15.666 "nvme_iov_md": false 00:08:15.666 }, 00:08:15.666 "memory_domains": [ 00:08:15.666 { 00:08:15.666 
"dma_device_id": "system", 00:08:15.666 "dma_device_type": 1 00:08:15.666 }, 00:08:15.666 { 00:08:15.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:15.666 "dma_device_type": 2 00:08:15.666 }, 00:08:15.666 { 00:08:15.666 "dma_device_id": "system", 00:08:15.666 "dma_device_type": 1 00:08:15.666 }, 00:08:15.666 { 00:08:15.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:15.666 "dma_device_type": 2 00:08:15.666 } 00:08:15.666 ], 00:08:15.666 "driver_specific": { 00:08:15.666 "raid": { 00:08:15.666 "uuid": "9c59c2f7-cfce-416f-84da-5d96238fe67d", 00:08:15.666 "strip_size_kb": 0, 00:08:15.666 "state": "online", 00:08:15.666 "raid_level": "raid1", 00:08:15.666 "superblock": true, 00:08:15.666 "num_base_bdevs": 2, 00:08:15.666 "num_base_bdevs_discovered": 2, 00:08:15.666 "num_base_bdevs_operational": 2, 00:08:15.666 "base_bdevs_list": [ 00:08:15.666 { 00:08:15.666 "name": "BaseBdev1", 00:08:15.666 "uuid": "0ff672ed-1e1b-41a3-909d-f499b63d6b42", 00:08:15.666 "is_configured": true, 00:08:15.666 "data_offset": 2048, 00:08:15.666 "data_size": 63488 00:08:15.666 }, 00:08:15.666 { 00:08:15.666 "name": "BaseBdev2", 00:08:15.666 "uuid": "7ba6f3c6-1700-4b73-9cbb-1d4956840e9f", 00:08:15.666 "is_configured": true, 00:08:15.666 "data_offset": 2048, 00:08:15.666 "data_size": 63488 00:08:15.666 } 00:08:15.666 ] 00:08:15.666 } 00:08:15.666 } 00:08:15.666 }' 00:08:15.666 04:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:15.666 04:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:15.666 BaseBdev2' 00:08:15.666 04:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:15.926 04:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:15.926 04:06:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:15.926 04:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:15.926 04:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:15.926 04:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.926 04:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.926 04:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.926 04:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:15.926 04:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:15.926 04:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:15.926 04:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:15.926 04:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:15.926 04:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.926 04:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.926 04:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.926 04:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:15.926 04:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:15.926 04:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:08:15.926 04:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.927 04:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.927 [2024-11-21 04:06:15.773513] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:15.927 04:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.927 04:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:15.927 04:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:15.927 04:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:15.927 04:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:08:15.927 04:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:15.927 04:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:15.927 04:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:15.927 04:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:15.927 04:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:15.927 04:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:15.927 04:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:15.927 04:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:15.927 04:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.927 04:06:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:15.927 04:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:15.927 04:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.927 04:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:15.927 04:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.927 04:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.927 04:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.927 04:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.927 "name": "Existed_Raid", 00:08:15.927 "uuid": "9c59c2f7-cfce-416f-84da-5d96238fe67d", 00:08:15.927 "strip_size_kb": 0, 00:08:15.927 "state": "online", 00:08:15.927 "raid_level": "raid1", 00:08:15.927 "superblock": true, 00:08:15.927 "num_base_bdevs": 2, 00:08:15.927 "num_base_bdevs_discovered": 1, 00:08:15.927 "num_base_bdevs_operational": 1, 00:08:15.927 "base_bdevs_list": [ 00:08:15.927 { 00:08:15.927 "name": null, 00:08:15.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:15.927 "is_configured": false, 00:08:15.927 "data_offset": 0, 00:08:15.927 "data_size": 63488 00:08:15.927 }, 00:08:15.927 { 00:08:15.927 "name": "BaseBdev2", 00:08:15.927 "uuid": "7ba6f3c6-1700-4b73-9cbb-1d4956840e9f", 00:08:15.927 "is_configured": true, 00:08:15.927 "data_offset": 2048, 00:08:15.927 "data_size": 63488 00:08:15.927 } 00:08:15.927 ] 00:08:15.927 }' 00:08:15.927 04:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.927 04:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.497 04:06:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:16.497 04:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:16.497 04:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:16.497 04:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.497 04:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.497 04:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.497 04:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.497 04:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:16.497 04:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:16.497 04:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:16.497 04:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.497 04:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.497 [2024-11-21 04:06:16.277354] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:16.497 [2024-11-21 04:06:16.277530] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:16.497 [2024-11-21 04:06:16.298819] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:16.497 [2024-11-21 04:06:16.298883] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:16.497 [2024-11-21 04:06:16.298895] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, 
state offline 00:08:16.497 04:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.497 04:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:16.497 04:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:16.497 04:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.497 04:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.497 04:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.497 04:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:16.497 04:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.497 04:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:16.497 04:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:16.497 04:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:16.497 04:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 74207 00:08:16.497 04:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 74207 ']' 00:08:16.497 04:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 74207 00:08:16.497 04:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:16.497 04:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:16.497 04:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74207 00:08:16.497 04:06:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:16.497 04:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:16.497 killing process with pid 74207 00:08:16.497 04:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74207' 00:08:16.497 04:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 74207 00:08:16.497 [2024-11-21 04:06:16.400688] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:16.497 04:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 74207 00:08:16.497 [2024-11-21 04:06:16.402410] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:17.065 04:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:17.065 00:08:17.065 real 0m4.060s 00:08:17.065 user 0m6.253s 00:08:17.065 sys 0m0.878s 00:08:17.065 ************************************ 00:08:17.065 END TEST raid_state_function_test_sb 00:08:17.065 ************************************ 00:08:17.065 04:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:17.065 04:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.065 04:06:16 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:08:17.065 04:06:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:17.065 04:06:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:17.065 04:06:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:17.065 ************************************ 00:08:17.066 START TEST raid_superblock_test 00:08:17.066 ************************************ 00:08:17.066 04:06:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:08:17.066 
04:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:08:17.066 04:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:08:17.066 04:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:17.066 04:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:17.066 04:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:17.066 04:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:17.066 04:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:17.066 04:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:17.066 04:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:17.066 04:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:17.066 04:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:17.066 04:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:17.066 04:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:17.066 04:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:08:17.066 04:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:08:17.066 04:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74448 00:08:17.066 04:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74448 00:08:17.066 04:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:17.066 04:06:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 
-- # '[' -z 74448 ']' 00:08:17.066 04:06:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:17.066 04:06:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:17.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:17.066 04:06:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:17.066 04:06:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:17.066 04:06:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.066 [2024-11-21 04:06:16.904403] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:08:17.066 [2024-11-21 04:06:16.904688] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74448 ] 00:08:17.324 [2024-11-21 04:06:17.063953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.324 [2024-11-21 04:06:17.109947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.324 [2024-11-21 04:06:17.188975] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:17.324 [2024-11-21 04:06:17.189122] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:17.893 04:06:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:17.893 04:06:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:17.893 04:06:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:17.893 04:06:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- 
# (( i <= num_base_bdevs )) 00:08:17.893 04:06:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:17.894 04:06:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:17.894 04:06:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:17.894 04:06:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:17.894 04:06:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:17.894 04:06:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:17.894 04:06:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:17.894 04:06:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.894 04:06:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.894 malloc1 00:08:17.894 04:06:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.894 04:06:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:17.894 04:06:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.894 04:06:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.894 [2024-11-21 04:06:17.769285] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:17.894 [2024-11-21 04:06:17.769405] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:17.894 [2024-11-21 04:06:17.769484] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:08:17.894 [2024-11-21 04:06:17.769552] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:17.894 [2024-11-21 04:06:17.771961] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:17.894 [2024-11-21 04:06:17.772060] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:17.894 pt1 00:08:17.894 04:06:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.894 04:06:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:17.894 04:06:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:17.894 04:06:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:17.894 04:06:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:17.894 04:06:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:17.894 04:06:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:17.894 04:06:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:17.894 04:06:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:17.894 04:06:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:17.894 04:06:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.894 04:06:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.894 malloc2 00:08:17.894 04:06:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.894 04:06:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:17.894 04:06:17 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.894 04:06:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.894 [2024-11-21 04:06:17.804063] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:17.894 [2024-11-21 04:06:17.804157] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:17.894 [2024-11-21 04:06:17.804192] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:17.894 [2024-11-21 04:06:17.804271] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:17.894 [2024-11-21 04:06:17.806624] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:17.894 [2024-11-21 04:06:17.806695] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:17.894 pt2 00:08:17.894 04:06:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.894 04:06:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:17.894 04:06:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:17.894 04:06:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:17.894 04:06:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.894 04:06:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.894 [2024-11-21 04:06:17.816083] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:17.894 [2024-11-21 04:06:17.818173] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:17.894 [2024-11-21 04:06:17.818411] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:08:17.894 [2024-11-21 04:06:17.818466] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:17.894 [2024-11-21 04:06:17.818821] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:08:17.894 [2024-11-21 04:06:17.819033] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:08:17.894 [2024-11-21 04:06:17.819079] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:08:17.894 [2024-11-21 04:06:17.819287] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:17.894 04:06:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.894 04:06:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:17.894 04:06:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:17.894 04:06:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:17.894 04:06:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:17.894 04:06:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:17.894 04:06:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:17.894 04:06:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:17.894 04:06:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:17.894 04:06:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:17.894 04:06:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:17.894 04:06:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:17.894 04:06:17 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.894 04:06:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.894 04:06:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.894 04:06:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.894 04:06:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:17.894 "name": "raid_bdev1", 00:08:17.894 "uuid": "7440eea8-1cd3-49a1-ba6a-7b0a1ce889b3", 00:08:17.894 "strip_size_kb": 0, 00:08:17.894 "state": "online", 00:08:17.894 "raid_level": "raid1", 00:08:17.894 "superblock": true, 00:08:17.894 "num_base_bdevs": 2, 00:08:17.894 "num_base_bdevs_discovered": 2, 00:08:17.894 "num_base_bdevs_operational": 2, 00:08:17.894 "base_bdevs_list": [ 00:08:17.894 { 00:08:17.894 "name": "pt1", 00:08:17.894 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:17.894 "is_configured": true, 00:08:17.894 "data_offset": 2048, 00:08:17.894 "data_size": 63488 00:08:17.894 }, 00:08:17.894 { 00:08:17.894 "name": "pt2", 00:08:17.894 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:17.894 "is_configured": true, 00:08:17.894 "data_offset": 2048, 00:08:17.894 "data_size": 63488 00:08:17.894 } 00:08:17.894 ] 00:08:17.894 }' 00:08:17.894 04:06:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:17.894 04:06:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.463 04:06:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:18.463 04:06:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:18.463 04:06:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:18.463 04:06:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:18.463 04:06:18 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:18.463 04:06:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:18.463 04:06:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:18.463 04:06:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.463 04:06:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:18.463 04:06:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.463 [2024-11-21 04:06:18.275756] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:18.463 04:06:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.463 04:06:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:18.463 "name": "raid_bdev1", 00:08:18.463 "aliases": [ 00:08:18.463 "7440eea8-1cd3-49a1-ba6a-7b0a1ce889b3" 00:08:18.463 ], 00:08:18.463 "product_name": "Raid Volume", 00:08:18.463 "block_size": 512, 00:08:18.463 "num_blocks": 63488, 00:08:18.463 "uuid": "7440eea8-1cd3-49a1-ba6a-7b0a1ce889b3", 00:08:18.463 "assigned_rate_limits": { 00:08:18.463 "rw_ios_per_sec": 0, 00:08:18.463 "rw_mbytes_per_sec": 0, 00:08:18.463 "r_mbytes_per_sec": 0, 00:08:18.463 "w_mbytes_per_sec": 0 00:08:18.463 }, 00:08:18.463 "claimed": false, 00:08:18.463 "zoned": false, 00:08:18.463 "supported_io_types": { 00:08:18.463 "read": true, 00:08:18.463 "write": true, 00:08:18.463 "unmap": false, 00:08:18.463 "flush": false, 00:08:18.463 "reset": true, 00:08:18.463 "nvme_admin": false, 00:08:18.463 "nvme_io": false, 00:08:18.463 "nvme_io_md": false, 00:08:18.463 "write_zeroes": true, 00:08:18.463 "zcopy": false, 00:08:18.463 "get_zone_info": false, 00:08:18.463 "zone_management": false, 00:08:18.463 "zone_append": false, 00:08:18.463 "compare": false, 00:08:18.463 
"compare_and_write": false, 00:08:18.463 "abort": false, 00:08:18.463 "seek_hole": false, 00:08:18.463 "seek_data": false, 00:08:18.463 "copy": false, 00:08:18.463 "nvme_iov_md": false 00:08:18.463 }, 00:08:18.463 "memory_domains": [ 00:08:18.463 { 00:08:18.463 "dma_device_id": "system", 00:08:18.463 "dma_device_type": 1 00:08:18.463 }, 00:08:18.463 { 00:08:18.463 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.463 "dma_device_type": 2 00:08:18.463 }, 00:08:18.463 { 00:08:18.463 "dma_device_id": "system", 00:08:18.463 "dma_device_type": 1 00:08:18.463 }, 00:08:18.463 { 00:08:18.463 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.463 "dma_device_type": 2 00:08:18.463 } 00:08:18.463 ], 00:08:18.463 "driver_specific": { 00:08:18.463 "raid": { 00:08:18.463 "uuid": "7440eea8-1cd3-49a1-ba6a-7b0a1ce889b3", 00:08:18.463 "strip_size_kb": 0, 00:08:18.463 "state": "online", 00:08:18.463 "raid_level": "raid1", 00:08:18.463 "superblock": true, 00:08:18.463 "num_base_bdevs": 2, 00:08:18.463 "num_base_bdevs_discovered": 2, 00:08:18.463 "num_base_bdevs_operational": 2, 00:08:18.463 "base_bdevs_list": [ 00:08:18.463 { 00:08:18.463 "name": "pt1", 00:08:18.463 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:18.464 "is_configured": true, 00:08:18.464 "data_offset": 2048, 00:08:18.464 "data_size": 63488 00:08:18.464 }, 00:08:18.464 { 00:08:18.464 "name": "pt2", 00:08:18.464 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:18.464 "is_configured": true, 00:08:18.464 "data_offset": 2048, 00:08:18.464 "data_size": 63488 00:08:18.464 } 00:08:18.464 ] 00:08:18.464 } 00:08:18.464 } 00:08:18.464 }' 00:08:18.464 04:06:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:18.464 04:06:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:18.464 pt2' 00:08:18.464 04:06:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r 
'[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:18.464 04:06:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:18.464 04:06:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:18.464 04:06:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:18.464 04:06:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.464 04:06:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.464 04:06:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:18.464 04:06:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.723 04:06:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:18.723 04:06:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:18.723 04:06:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:18.723 04:06:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:18.723 04:06:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:18.723 04:06:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.723 04:06:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.723 04:06:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.723 04:06:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:18.723 04:06:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:18.723 04:06:18 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:18.723 04:06:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.723 04:06:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.723 04:06:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:18.723 [2024-11-21 04:06:18.523230] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:18.723 04:06:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.723 04:06:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=7440eea8-1cd3-49a1-ba6a-7b0a1ce889b3 00:08:18.723 04:06:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 7440eea8-1cd3-49a1-ba6a-7b0a1ce889b3 ']' 00:08:18.723 04:06:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:18.723 04:06:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.723 04:06:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.723 [2024-11-21 04:06:18.570870] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:18.723 [2024-11-21 04:06:18.570901] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:18.723 [2024-11-21 04:06:18.570992] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:18.723 [2024-11-21 04:06:18.571067] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:18.723 [2024-11-21 04:06:18.571078] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:08:18.723 04:06:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:18.723 04:06:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.723 04:06:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:18.723 04:06:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.723 04:06:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.723 04:06:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.723 04:06:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:18.723 04:06:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:18.723 04:06:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:18.723 04:06:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:18.723 04:06:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.723 04:06:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.723 04:06:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.723 04:06:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:18.723 04:06:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:18.723 04:06:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.723 04:06:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.723 04:06:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.724 04:06:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:18.724 04:06:18 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:18.724 04:06:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.724 04:06:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.724 04:06:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.983 04:06:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:18.983 04:06:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:18.984 04:06:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:18.984 04:06:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:18.984 04:06:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:18.984 04:06:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:18.984 04:06:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:18.984 04:06:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:18.984 04:06:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:18.984 04:06:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.984 04:06:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.984 [2024-11-21 04:06:18.714656] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:18.984 [2024-11-21 04:06:18.716950] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:18.984 
[2024-11-21 04:06:18.717022] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:18.984 [2024-11-21 04:06:18.717072] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:18.984 [2024-11-21 04:06:18.717089] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:18.984 [2024-11-21 04:06:18.717099] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:08:18.984 request: 00:08:18.984 { 00:08:18.984 "name": "raid_bdev1", 00:08:18.984 "raid_level": "raid1", 00:08:18.984 "base_bdevs": [ 00:08:18.984 "malloc1", 00:08:18.984 "malloc2" 00:08:18.984 ], 00:08:18.984 "superblock": false, 00:08:18.984 "method": "bdev_raid_create", 00:08:18.984 "req_id": 1 00:08:18.984 } 00:08:18.984 Got JSON-RPC error response 00:08:18.984 response: 00:08:18.984 { 00:08:18.984 "code": -17, 00:08:18.984 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:18.984 } 00:08:18.984 04:06:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:18.984 04:06:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:18.984 04:06:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:18.984 04:06:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:18.984 04:06:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:18.984 04:06:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:18.984 04:06:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.984 04:06:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.984 04:06:18 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:18.984 04:06:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.984 04:06:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:18.984 04:06:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:18.984 04:06:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:18.984 04:06:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.984 04:06:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.984 [2024-11-21 04:06:18.766535] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:18.984 [2024-11-21 04:06:18.766596] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:18.984 [2024-11-21 04:06:18.766617] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:18.984 [2024-11-21 04:06:18.766625] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:18.984 [2024-11-21 04:06:18.769196] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:18.984 [2024-11-21 04:06:18.769301] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:18.984 [2024-11-21 04:06:18.769416] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:18.984 [2024-11-21 04:06:18.769478] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:18.984 pt1 00:08:18.984 04:06:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.984 04:06:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:08:18.984 04:06:18 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:18.984 04:06:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:18.984 04:06:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:18.984 04:06:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:18.984 04:06:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:18.984 04:06:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:18.984 04:06:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:18.984 04:06:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:18.984 04:06:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:18.984 04:06:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.984 04:06:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.984 04:06:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.984 04:06:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:18.984 04:06:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.984 04:06:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:18.984 "name": "raid_bdev1", 00:08:18.984 "uuid": "7440eea8-1cd3-49a1-ba6a-7b0a1ce889b3", 00:08:18.984 "strip_size_kb": 0, 00:08:18.984 "state": "configuring", 00:08:18.984 "raid_level": "raid1", 00:08:18.984 "superblock": true, 00:08:18.984 "num_base_bdevs": 2, 00:08:18.984 "num_base_bdevs_discovered": 1, 00:08:18.984 "num_base_bdevs_operational": 2, 00:08:18.984 "base_bdevs_list": [ 00:08:18.984 { 00:08:18.984 "name": 
"pt1", 00:08:18.984 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:18.984 "is_configured": true, 00:08:18.984 "data_offset": 2048, 00:08:18.984 "data_size": 63488 00:08:18.984 }, 00:08:18.984 { 00:08:18.984 "name": null, 00:08:18.984 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:18.984 "is_configured": false, 00:08:18.984 "data_offset": 2048, 00:08:18.984 "data_size": 63488 00:08:18.984 } 00:08:18.984 ] 00:08:18.984 }' 00:08:18.984 04:06:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:18.984 04:06:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.554 04:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:19.554 04:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:19.554 04:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:19.554 04:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:19.554 04:06:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.554 04:06:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.554 [2024-11-21 04:06:19.265734] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:19.554 [2024-11-21 04:06:19.265817] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:19.554 [2024-11-21 04:06:19.265845] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:19.554 [2024-11-21 04:06:19.265856] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:19.554 [2024-11-21 04:06:19.266432] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:19.554 [2024-11-21 04:06:19.266466] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: pt2 00:08:19.554 [2024-11-21 04:06:19.266568] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:19.554 [2024-11-21 04:06:19.266609] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:19.554 [2024-11-21 04:06:19.266786] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:08:19.554 [2024-11-21 04:06:19.266798] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:19.554 [2024-11-21 04:06:19.267084] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:08:19.554 [2024-11-21 04:06:19.267241] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:08:19.554 [2024-11-21 04:06:19.267259] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:08:19.554 [2024-11-21 04:06:19.267380] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:19.554 pt2 00:08:19.554 04:06:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.554 04:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:19.554 04:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:19.554 04:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:19.554 04:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:19.554 04:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:19.554 04:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:19.554 04:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:19.554 04:06:19 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:19.554 04:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:19.554 04:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:19.554 04:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:19.554 04:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:19.554 04:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.554 04:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:19.554 04:06:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.554 04:06:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.554 04:06:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.554 04:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:19.554 "name": "raid_bdev1", 00:08:19.554 "uuid": "7440eea8-1cd3-49a1-ba6a-7b0a1ce889b3", 00:08:19.554 "strip_size_kb": 0, 00:08:19.554 "state": "online", 00:08:19.554 "raid_level": "raid1", 00:08:19.554 "superblock": true, 00:08:19.554 "num_base_bdevs": 2, 00:08:19.554 "num_base_bdevs_discovered": 2, 00:08:19.554 "num_base_bdevs_operational": 2, 00:08:19.554 "base_bdevs_list": [ 00:08:19.554 { 00:08:19.554 "name": "pt1", 00:08:19.554 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:19.554 "is_configured": true, 00:08:19.554 "data_offset": 2048, 00:08:19.554 "data_size": 63488 00:08:19.554 }, 00:08:19.554 { 00:08:19.554 "name": "pt2", 00:08:19.554 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:19.554 "is_configured": true, 00:08:19.554 "data_offset": 2048, 00:08:19.554 "data_size": 63488 00:08:19.554 } 00:08:19.554 ] 00:08:19.554 }' 
00:08:19.554 04:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:19.554 04:06:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.815 04:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:19.815 04:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:19.815 04:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:19.815 04:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:19.815 04:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:19.815 04:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:19.815 04:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:19.815 04:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:19.815 04:06:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.815 04:06:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.815 [2024-11-21 04:06:19.709249] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:19.815 04:06:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.815 04:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:19.815 "name": "raid_bdev1", 00:08:19.815 "aliases": [ 00:08:19.815 "7440eea8-1cd3-49a1-ba6a-7b0a1ce889b3" 00:08:19.815 ], 00:08:19.815 "product_name": "Raid Volume", 00:08:19.815 "block_size": 512, 00:08:19.815 "num_blocks": 63488, 00:08:19.815 "uuid": "7440eea8-1cd3-49a1-ba6a-7b0a1ce889b3", 00:08:19.815 "assigned_rate_limits": { 00:08:19.815 "rw_ios_per_sec": 0, 00:08:19.815 "rw_mbytes_per_sec": 
0, 00:08:19.815 "r_mbytes_per_sec": 0, 00:08:19.815 "w_mbytes_per_sec": 0 00:08:19.815 }, 00:08:19.815 "claimed": false, 00:08:19.815 "zoned": false, 00:08:19.815 "supported_io_types": { 00:08:19.815 "read": true, 00:08:19.815 "write": true, 00:08:19.815 "unmap": false, 00:08:19.815 "flush": false, 00:08:19.815 "reset": true, 00:08:19.815 "nvme_admin": false, 00:08:19.815 "nvme_io": false, 00:08:19.815 "nvme_io_md": false, 00:08:19.815 "write_zeroes": true, 00:08:19.815 "zcopy": false, 00:08:19.815 "get_zone_info": false, 00:08:19.815 "zone_management": false, 00:08:19.815 "zone_append": false, 00:08:19.815 "compare": false, 00:08:19.815 "compare_and_write": false, 00:08:19.815 "abort": false, 00:08:19.815 "seek_hole": false, 00:08:19.815 "seek_data": false, 00:08:19.815 "copy": false, 00:08:19.815 "nvme_iov_md": false 00:08:19.815 }, 00:08:19.815 "memory_domains": [ 00:08:19.815 { 00:08:19.815 "dma_device_id": "system", 00:08:19.815 "dma_device_type": 1 00:08:19.815 }, 00:08:19.815 { 00:08:19.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:19.815 "dma_device_type": 2 00:08:19.815 }, 00:08:19.815 { 00:08:19.815 "dma_device_id": "system", 00:08:19.815 "dma_device_type": 1 00:08:19.815 }, 00:08:19.815 { 00:08:19.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:19.815 "dma_device_type": 2 00:08:19.815 } 00:08:19.815 ], 00:08:19.815 "driver_specific": { 00:08:19.815 "raid": { 00:08:19.815 "uuid": "7440eea8-1cd3-49a1-ba6a-7b0a1ce889b3", 00:08:19.815 "strip_size_kb": 0, 00:08:19.815 "state": "online", 00:08:19.815 "raid_level": "raid1", 00:08:19.815 "superblock": true, 00:08:19.815 "num_base_bdevs": 2, 00:08:19.815 "num_base_bdevs_discovered": 2, 00:08:19.815 "num_base_bdevs_operational": 2, 00:08:19.815 "base_bdevs_list": [ 00:08:19.815 { 00:08:19.815 "name": "pt1", 00:08:19.815 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:19.815 "is_configured": true, 00:08:19.815 "data_offset": 2048, 00:08:19.815 "data_size": 63488 00:08:19.815 }, 00:08:19.815 { 
00:08:19.815 "name": "pt2", 00:08:19.815 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:19.815 "is_configured": true, 00:08:19.815 "data_offset": 2048, 00:08:19.815 "data_size": 63488 00:08:19.815 } 00:08:19.815 ] 00:08:19.815 } 00:08:19.815 } 00:08:19.815 }' 00:08:19.815 04:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:19.815 04:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:19.815 pt2' 00:08:19.815 04:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:20.075 04:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:20.075 04:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:20.075 04:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:20.075 04:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:20.075 04:06:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.075 04:06:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.075 04:06:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.075 04:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:20.075 04:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:20.075 04:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:20.075 04:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:20.075 04:06:19 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.075 04:06:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.075 04:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:20.075 04:06:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.075 04:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:20.075 04:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:20.075 04:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:20.075 04:06:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.075 04:06:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.075 04:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:20.075 [2024-11-21 04:06:19.908827] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:20.075 04:06:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.075 04:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 7440eea8-1cd3-49a1-ba6a-7b0a1ce889b3 '!=' 7440eea8-1cd3-49a1-ba6a-7b0a1ce889b3 ']' 00:08:20.075 04:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:08:20.075 04:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:20.075 04:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:20.075 04:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:08:20.075 04:06:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.075 04:06:19 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.075 [2024-11-21 04:06:19.960505] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:08:20.075 04:06:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.075 04:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:20.075 04:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:20.075 04:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:20.075 04:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:20.075 04:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:20.075 04:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:20.075 04:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:20.075 04:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:20.075 04:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:20.075 04:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:20.075 04:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.075 04:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:20.075 04:06:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.075 04:06:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.075 04:06:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.075 04:06:20 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:20.075 "name": "raid_bdev1", 00:08:20.075 "uuid": "7440eea8-1cd3-49a1-ba6a-7b0a1ce889b3", 00:08:20.075 "strip_size_kb": 0, 00:08:20.075 "state": "online", 00:08:20.075 "raid_level": "raid1", 00:08:20.075 "superblock": true, 00:08:20.075 "num_base_bdevs": 2, 00:08:20.076 "num_base_bdevs_discovered": 1, 00:08:20.076 "num_base_bdevs_operational": 1, 00:08:20.076 "base_bdevs_list": [ 00:08:20.076 { 00:08:20.076 "name": null, 00:08:20.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:20.076 "is_configured": false, 00:08:20.076 "data_offset": 0, 00:08:20.076 "data_size": 63488 00:08:20.076 }, 00:08:20.076 { 00:08:20.076 "name": "pt2", 00:08:20.076 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:20.076 "is_configured": true, 00:08:20.076 "data_offset": 2048, 00:08:20.076 "data_size": 63488 00:08:20.076 } 00:08:20.076 ] 00:08:20.076 }' 00:08:20.076 04:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:20.076 04:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.645 04:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:20.645 04:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.645 04:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.645 [2024-11-21 04:06:20.435743] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:20.645 [2024-11-21 04:06:20.435830] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:20.645 [2024-11-21 04:06:20.436016] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:20.645 [2024-11-21 04:06:20.436128] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:20.645 [2024-11-21 04:06:20.436203] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:08:20.645 04:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.645 04:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.645 04:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:08:20.645 04:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.645 04:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.645 04:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.645 04:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:08:20.645 04:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:08:20.645 04:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:08:20.645 04:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:20.645 04:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:08:20.645 04:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.645 04:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.645 04:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.645 04:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:08:20.645 04:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:20.645 04:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:08:20.645 04:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:08:20.645 04:06:20 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@519 -- # i=1 00:08:20.645 04:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:20.645 04:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.645 04:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.645 [2024-11-21 04:06:20.495614] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:20.645 [2024-11-21 04:06:20.495682] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:20.645 [2024-11-21 04:06:20.495706] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:08:20.645 [2024-11-21 04:06:20.495717] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:20.645 [2024-11-21 04:06:20.498354] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:20.645 [2024-11-21 04:06:20.498391] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:20.645 [2024-11-21 04:06:20.498480] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:20.645 [2024-11-21 04:06:20.498517] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:20.645 [2024-11-21 04:06:20.498604] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:08:20.645 [2024-11-21 04:06:20.498612] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:20.645 [2024-11-21 04:06:20.498898] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:08:20.645 [2024-11-21 04:06:20.499051] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:08:20.645 [2024-11-21 04:06:20.499064] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with 
name raid_bdev1, raid_bdev 0x617000001c80 00:08:20.645 [2024-11-21 04:06:20.499199] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:20.645 pt2 00:08:20.645 04:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.645 04:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:20.645 04:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:20.645 04:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:20.645 04:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:20.645 04:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:20.645 04:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:20.645 04:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:20.645 04:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:20.645 04:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:20.645 04:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:20.645 04:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.645 04:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:20.645 04:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.645 04:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.645 04:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.645 04:06:20 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:20.645 "name": "raid_bdev1", 00:08:20.645 "uuid": "7440eea8-1cd3-49a1-ba6a-7b0a1ce889b3", 00:08:20.645 "strip_size_kb": 0, 00:08:20.645 "state": "online", 00:08:20.645 "raid_level": "raid1", 00:08:20.645 "superblock": true, 00:08:20.645 "num_base_bdevs": 2, 00:08:20.645 "num_base_bdevs_discovered": 1, 00:08:20.645 "num_base_bdevs_operational": 1, 00:08:20.645 "base_bdevs_list": [ 00:08:20.645 { 00:08:20.645 "name": null, 00:08:20.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:20.645 "is_configured": false, 00:08:20.645 "data_offset": 2048, 00:08:20.645 "data_size": 63488 00:08:20.645 }, 00:08:20.645 { 00:08:20.645 "name": "pt2", 00:08:20.645 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:20.645 "is_configured": true, 00:08:20.645 "data_offset": 2048, 00:08:20.645 "data_size": 63488 00:08:20.645 } 00:08:20.645 ] 00:08:20.645 }' 00:08:20.645 04:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:20.645 04:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.215 04:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:21.215 04:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.215 04:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.215 [2024-11-21 04:06:20.926944] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:21.215 [2024-11-21 04:06:20.927043] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:21.215 [2024-11-21 04:06:20.927207] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:21.215 [2024-11-21 04:06:20.927332] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:21.215 [2024-11-21 04:06:20.927416] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:08:21.215 04:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.215 04:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.215 04:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:08:21.215 04:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.215 04:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.215 04:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.215 04:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:08:21.215 04:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:08:21.215 04:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:08:21.215 04:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:21.215 04:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.215 04:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.215 [2024-11-21 04:06:20.986839] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:21.215 [2024-11-21 04:06:20.986936] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:21.215 [2024-11-21 04:06:20.986958] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:08:21.215 [2024-11-21 04:06:20.986974] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:21.215 [2024-11-21 04:06:20.989651] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
00:08:21.215 [2024-11-21 04:06:20.989695] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:21.215 [2024-11-21 04:06:20.989797] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:21.215 [2024-11-21 04:06:20.989853] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:21.215 [2024-11-21 04:06:20.990026] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:08:21.215 [2024-11-21 04:06:20.990045] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:21.215 [2024-11-21 04:06:20.990065] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state configuring 00:08:21.215 [2024-11-21 04:06:20.990105] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:21.215 [2024-11-21 04:06:20.990183] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:08:21.215 [2024-11-21 04:06:20.990194] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:21.215 [2024-11-21 04:06:20.990466] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:08:21.215 [2024-11-21 04:06:20.990599] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:08:21.215 [2024-11-21 04:06:20.990609] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:08:21.215 [2024-11-21 04:06:20.990782] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:21.215 pt1 00:08:21.215 04:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.215 04:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:08:21.215 04:06:20 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:21.215 04:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:21.216 04:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:21.216 04:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:21.216 04:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:21.216 04:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:21.216 04:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:21.216 04:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:21.216 04:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:21.216 04:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:21.216 04:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.216 04:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:21.216 04:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.216 04:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.216 04:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.216 04:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:21.216 "name": "raid_bdev1", 00:08:21.216 "uuid": "7440eea8-1cd3-49a1-ba6a-7b0a1ce889b3", 00:08:21.216 "strip_size_kb": 0, 00:08:21.216 "state": "online", 00:08:21.216 "raid_level": "raid1", 00:08:21.216 "superblock": true, 00:08:21.216 "num_base_bdevs": 2, 00:08:21.216 
"num_base_bdevs_discovered": 1, 00:08:21.216 "num_base_bdevs_operational": 1, 00:08:21.216 "base_bdevs_list": [ 00:08:21.216 { 00:08:21.216 "name": null, 00:08:21.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:21.216 "is_configured": false, 00:08:21.216 "data_offset": 2048, 00:08:21.216 "data_size": 63488 00:08:21.216 }, 00:08:21.216 { 00:08:21.216 "name": "pt2", 00:08:21.216 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:21.216 "is_configured": true, 00:08:21.216 "data_offset": 2048, 00:08:21.216 "data_size": 63488 00:08:21.216 } 00:08:21.216 ] 00:08:21.216 }' 00:08:21.216 04:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:21.216 04:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.475 04:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:21.475 04:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:08:21.475 04:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.475 04:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.735 04:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.735 04:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:08:21.735 04:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:08:21.735 04:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:21.735 04:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.735 04:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.735 [2024-11-21 04:06:21.498238] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 
00:08:21.735 04:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.735 04:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 7440eea8-1cd3-49a1-ba6a-7b0a1ce889b3 '!=' 7440eea8-1cd3-49a1-ba6a-7b0a1ce889b3 ']' 00:08:21.735 04:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74448 00:08:21.735 04:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 74448 ']' 00:08:21.735 04:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 74448 00:08:21.735 04:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:21.735 04:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:21.735 04:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74448 00:08:21.735 04:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:21.735 04:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:21.735 04:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74448' 00:08:21.735 killing process with pid 74448 00:08:21.735 04:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 74448 00:08:21.735 [2024-11-21 04:06:21.580563] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:21.735 [2024-11-21 04:06:21.580678] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:21.735 04:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 74448 00:08:21.735 [2024-11-21 04:06:21.580743] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:21.735 [2024-11-21 04:06:21.580754] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:08:21.735 [2024-11-21 04:06:21.624914] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:21.996 04:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:21.996 00:08:21.996 real 0m5.143s 00:08:21.996 user 0m8.236s 00:08:21.996 sys 0m1.165s 00:08:21.996 04:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:21.996 04:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.996 ************************************ 00:08:21.996 END TEST raid_superblock_test 00:08:21.996 ************************************ 00:08:22.255 04:06:22 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:08:22.255 04:06:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:22.255 04:06:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:22.255 04:06:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:22.255 ************************************ 00:08:22.255 START TEST raid_read_error_test 00:08:22.255 ************************************ 00:08:22.255 04:06:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:08:22.255 04:06:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:22.255 04:06:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:22.255 04:06:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:22.255 04:06:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:22.255 04:06:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:22.255 04:06:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:22.255 04:06:22 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:22.255 04:06:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:22.255 04:06:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:22.255 04:06:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:22.256 04:06:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:22.256 04:06:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:22.256 04:06:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:22.256 04:06:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:22.256 04:06:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:22.256 04:06:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:22.256 04:06:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:22.256 04:06:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:22.256 04:06:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:22.256 04:06:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:22.256 04:06:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:22.256 04:06:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.fIzaoK9eUx 00:08:22.256 04:06:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74767 00:08:22.256 04:06:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:22.256 04:06:22 bdev_raid.raid_read_error_test 
-- bdev/bdev_raid.sh@811 -- # waitforlisten 74767 00:08:22.256 04:06:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 74767 ']' 00:08:22.256 04:06:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:22.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:22.256 04:06:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:22.256 04:06:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:22.256 04:06:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:22.256 04:06:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.256 [2024-11-21 04:06:22.131021] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:08:22.256 [2024-11-21 04:06:22.131150] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74767 ] 00:08:22.515 [2024-11-21 04:06:22.288890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.515 [2024-11-21 04:06:22.329645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.515 [2024-11-21 04:06:22.405666] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:22.515 [2024-11-21 04:06:22.405707] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:23.109 04:06:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:23.109 04:06:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:23.109 04:06:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:23.109 04:06:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:23.109 04:06:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.109 04:06:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.109 BaseBdev1_malloc 00:08:23.109 04:06:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.109 04:06:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:23.109 04:06:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.109 04:06:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.109 true 00:08:23.109 04:06:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:23.109 04:06:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:23.109 04:06:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.109 04:06:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.109 [2024-11-21 04:06:23.016403] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:23.109 [2024-11-21 04:06:23.016474] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:23.109 [2024-11-21 04:06:23.016507] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:08:23.109 [2024-11-21 04:06:23.016518] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:23.109 [2024-11-21 04:06:23.019248] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:23.109 [2024-11-21 04:06:23.019287] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:23.109 BaseBdev1 00:08:23.109 04:06:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.109 04:06:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:23.109 04:06:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:23.109 04:06:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.109 04:06:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.109 BaseBdev2_malloc 00:08:23.109 04:06:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.109 04:06:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:23.109 04:06:23 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.109 04:06:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.109 true 00:08:23.109 04:06:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.109 04:06:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:23.109 04:06:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.109 04:06:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.109 [2024-11-21 04:06:23.063728] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:23.109 [2024-11-21 04:06:23.063788] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:23.109 [2024-11-21 04:06:23.063812] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:08:23.109 [2024-11-21 04:06:23.063831] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:23.109 [2024-11-21 04:06:23.066415] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:23.109 [2024-11-21 04:06:23.066455] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:23.109 BaseBdev2 00:08:23.109 04:06:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.109 04:06:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:23.109 04:06:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.109 04:06:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.109 [2024-11-21 04:06:23.075756] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:23.109 
[2024-11-21 04:06:23.077996] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:23.109 [2024-11-21 04:06:23.078203] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:08:23.109 [2024-11-21 04:06:23.078231] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:23.109 [2024-11-21 04:06:23.078539] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:08:23.109 [2024-11-21 04:06:23.078784] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:08:23.109 [2024-11-21 04:06:23.078809] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:08:23.109 [2024-11-21 04:06:23.078965] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:23.109 04:06:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.370 04:06:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:23.370 04:06:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:23.370 04:06:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:23.370 04:06:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:23.370 04:06:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:23.370 04:06:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:23.370 04:06:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.370 04:06:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.370 04:06:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:23.370 04:06:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.370 04:06:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.370 04:06:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.370 04:06:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.370 04:06:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:23.370 04:06:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.370 04:06:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:23.370 "name": "raid_bdev1", 00:08:23.370 "uuid": "39c0a647-8608-4f0d-9fb8-0b8a8e1059ab", 00:08:23.370 "strip_size_kb": 0, 00:08:23.370 "state": "online", 00:08:23.370 "raid_level": "raid1", 00:08:23.370 "superblock": true, 00:08:23.370 "num_base_bdevs": 2, 00:08:23.370 "num_base_bdevs_discovered": 2, 00:08:23.370 "num_base_bdevs_operational": 2, 00:08:23.370 "base_bdevs_list": [ 00:08:23.370 { 00:08:23.370 "name": "BaseBdev1", 00:08:23.370 "uuid": "8aa3b21f-80a7-537e-8fcc-0a3d71bd0e32", 00:08:23.370 "is_configured": true, 00:08:23.370 "data_offset": 2048, 00:08:23.370 "data_size": 63488 00:08:23.370 }, 00:08:23.370 { 00:08:23.370 "name": "BaseBdev2", 00:08:23.370 "uuid": "2eabaf40-0986-5ff3-96dd-e1730a650c56", 00:08:23.370 "is_configured": true, 00:08:23.370 "data_offset": 2048, 00:08:23.370 "data_size": 63488 00:08:23.370 } 00:08:23.370 ] 00:08:23.370 }' 00:08:23.370 04:06:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:23.370 04:06:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.631 04:06:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:23.631 04:06:23 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:23.891 [2024-11-21 04:06:23.651269] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:08:24.832 04:06:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:24.832 04:06:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.832 04:06:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.832 04:06:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.832 04:06:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:24.832 04:06:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:24.832 04:06:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:08:24.832 04:06:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:24.832 04:06:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:24.832 04:06:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:24.832 04:06:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:24.832 04:06:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:24.832 04:06:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:24.832 04:06:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:24.832 04:06:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:24.832 04:06:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:08:24.832 04:06:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:24.832 04:06:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:24.832 04:06:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.832 04:06:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:24.832 04:06:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.832 04:06:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.832 04:06:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.832 04:06:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:24.832 "name": "raid_bdev1", 00:08:24.832 "uuid": "39c0a647-8608-4f0d-9fb8-0b8a8e1059ab", 00:08:24.832 "strip_size_kb": 0, 00:08:24.832 "state": "online", 00:08:24.832 "raid_level": "raid1", 00:08:24.832 "superblock": true, 00:08:24.832 "num_base_bdevs": 2, 00:08:24.832 "num_base_bdevs_discovered": 2, 00:08:24.832 "num_base_bdevs_operational": 2, 00:08:24.832 "base_bdevs_list": [ 00:08:24.832 { 00:08:24.832 "name": "BaseBdev1", 00:08:24.832 "uuid": "8aa3b21f-80a7-537e-8fcc-0a3d71bd0e32", 00:08:24.832 "is_configured": true, 00:08:24.832 "data_offset": 2048, 00:08:24.832 "data_size": 63488 00:08:24.832 }, 00:08:24.832 { 00:08:24.832 "name": "BaseBdev2", 00:08:24.832 "uuid": "2eabaf40-0986-5ff3-96dd-e1730a650c56", 00:08:24.832 "is_configured": true, 00:08:24.832 "data_offset": 2048, 00:08:24.832 "data_size": 63488 00:08:24.832 } 00:08:24.832 ] 00:08:24.832 }' 00:08:24.832 04:06:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:24.832 04:06:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.092 04:06:25 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:25.092 04:06:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.092 04:06:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.092 [2024-11-21 04:06:25.056785] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:25.092 [2024-11-21 04:06:25.056898] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:25.092 [2024-11-21 04:06:25.059536] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:25.092 [2024-11-21 04:06:25.059650] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:25.092 [2024-11-21 04:06:25.059802] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:25.093 [2024-11-21 04:06:25.059875] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:08:25.093 { 00:08:25.093 "results": [ 00:08:25.093 { 00:08:25.093 "job": "raid_bdev1", 00:08:25.093 "core_mask": "0x1", 00:08:25.093 "workload": "randrw", 00:08:25.093 "percentage": 50, 00:08:25.093 "status": "finished", 00:08:25.093 "queue_depth": 1, 00:08:25.093 "io_size": 131072, 00:08:25.093 "runtime": 1.406188, 00:08:25.093 "iops": 15729.049031850649, 00:08:25.093 "mibps": 1966.1311289813311, 00:08:25.093 "io_failed": 0, 00:08:25.093 "io_timeout": 0, 00:08:25.093 "avg_latency_us": 61.12710365325165, 00:08:25.093 "min_latency_us": 21.799126637554586, 00:08:25.093 "max_latency_us": 1445.2262008733624 00:08:25.093 } 00:08:25.093 ], 00:08:25.093 "core_count": 1 00:08:25.093 } 00:08:25.093 04:06:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.093 04:06:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74767 00:08:25.093 
04:06:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 74767 ']' 00:08:25.093 04:06:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 74767 00:08:25.352 04:06:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:25.352 04:06:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:25.352 04:06:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74767 00:08:25.352 04:06:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:25.352 04:06:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:25.353 04:06:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74767' 00:08:25.353 killing process with pid 74767 00:08:25.353 04:06:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 74767 00:08:25.353 [2024-11-21 04:06:25.109239] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:25.353 04:06:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 74767 00:08:25.353 [2024-11-21 04:06:25.139287] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:25.613 04:06:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.fIzaoK9eUx 00:08:25.613 04:06:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:25.613 04:06:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:25.613 04:06:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:25.613 04:06:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:25.613 04:06:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:25.613 04:06:25 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:25.613 04:06:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:25.613 00:08:25.613 real 0m3.453s 00:08:25.613 user 0m4.327s 00:08:25.613 sys 0m0.608s 00:08:25.613 04:06:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:25.613 04:06:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.613 ************************************ 00:08:25.613 END TEST raid_read_error_test 00:08:25.613 ************************************ 00:08:25.613 04:06:25 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:08:25.613 04:06:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:25.613 04:06:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:25.613 04:06:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:25.613 ************************************ 00:08:25.613 START TEST raid_write_error_test 00:08:25.613 ************************************ 00:08:25.613 04:06:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write 00:08:25.613 04:06:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:25.613 04:06:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:25.613 04:06:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:25.613 04:06:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:25.613 04:06:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:25.613 04:06:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:25.613 04:06:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:25.613 
04:06:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:25.613 04:06:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:25.613 04:06:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:25.613 04:06:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:25.613 04:06:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:25.613 04:06:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:25.613 04:06:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:25.613 04:06:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:25.613 04:06:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:25.613 04:06:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:25.613 04:06:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:25.613 04:06:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:25.613 04:06:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:25.613 04:06:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:25.613 04:06:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.eofMHbqIzC 00:08:25.613 04:06:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74902 00:08:25.613 04:06:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:25.613 04:06:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74902 00:08:25.613 
04:06:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 74902 ']' 00:08:25.613 04:06:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:25.613 04:06:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:25.613 04:06:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:25.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:25.613 04:06:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:25.613 04:06:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.872 [2024-11-21 04:06:25.660308] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:08:25.872 [2024-11-21 04:06:25.660434] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74902 ] 00:08:25.872 [2024-11-21 04:06:25.817929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.132 [2024-11-21 04:06:25.859944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.132 [2024-11-21 04:06:25.939334] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:26.132 [2024-11-21 04:06:25.939403] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:26.703 04:06:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:26.703 04:06:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:26.703 04:06:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in 
"${base_bdevs[@]}" 00:08:26.703 04:06:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:26.703 04:06:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.703 04:06:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.703 BaseBdev1_malloc 00:08:26.703 04:06:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.703 04:06:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:26.703 04:06:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.703 04:06:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.703 true 00:08:26.703 04:06:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.703 04:06:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:26.703 04:06:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.703 04:06:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.703 [2024-11-21 04:06:26.531520] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:26.703 [2024-11-21 04:06:26.531584] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:26.703 [2024-11-21 04:06:26.531607] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:08:26.703 [2024-11-21 04:06:26.531616] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:26.703 [2024-11-21 04:06:26.534112] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:26.703 [2024-11-21 04:06:26.534211] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:26.703 BaseBdev1 00:08:26.703 04:06:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.703 04:06:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:26.703 04:06:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:26.703 04:06:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.703 04:06:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.703 BaseBdev2_malloc 00:08:26.703 04:06:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.703 04:06:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:26.703 04:06:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.703 04:06:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.703 true 00:08:26.703 04:06:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.703 04:06:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:26.703 04:06:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.703 04:06:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.703 [2024-11-21 04:06:26.578053] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:26.703 [2024-11-21 04:06:26.578158] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:26.703 [2024-11-21 04:06:26.578211] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:08:26.703 
[2024-11-21 04:06:26.578287] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:26.703 [2024-11-21 04:06:26.580652] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:26.703 [2024-11-21 04:06:26.580730] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:26.704 BaseBdev2 00:08:26.704 04:06:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.704 04:06:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:26.704 04:06:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.704 04:06:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.704 [2024-11-21 04:06:26.590112] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:26.704 [2024-11-21 04:06:26.592269] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:26.704 [2024-11-21 04:06:26.592456] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:08:26.704 [2024-11-21 04:06:26.592470] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:26.704 [2024-11-21 04:06:26.592724] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:08:26.704 [2024-11-21 04:06:26.592868] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:08:26.704 [2024-11-21 04:06:26.592881] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:08:26.704 [2024-11-21 04:06:26.593020] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:26.704 04:06:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.704 
04:06:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:26.704 04:06:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:26.704 04:06:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:26.704 04:06:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:26.704 04:06:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:26.704 04:06:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:26.704 04:06:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:26.704 04:06:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:26.704 04:06:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:26.704 04:06:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:26.704 04:06:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:26.704 04:06:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.704 04:06:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.704 04:06:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.704 04:06:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.704 04:06:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:26.704 "name": "raid_bdev1", 00:08:26.704 "uuid": "dbe463c9-1537-43a5-9a33-8e9018cd2c35", 00:08:26.704 "strip_size_kb": 0, 00:08:26.704 "state": "online", 00:08:26.704 "raid_level": "raid1", 00:08:26.704 "superblock": true, 00:08:26.704 
"num_base_bdevs": 2, 00:08:26.704 "num_base_bdevs_discovered": 2, 00:08:26.704 "num_base_bdevs_operational": 2, 00:08:26.704 "base_bdevs_list": [ 00:08:26.704 { 00:08:26.704 "name": "BaseBdev1", 00:08:26.704 "uuid": "5a6887d7-2054-5eb3-877b-c0c8df97d341", 00:08:26.704 "is_configured": true, 00:08:26.704 "data_offset": 2048, 00:08:26.704 "data_size": 63488 00:08:26.704 }, 00:08:26.704 { 00:08:26.704 "name": "BaseBdev2", 00:08:26.704 "uuid": "70bda26d-a6ef-5af4-b8b2-d38fa8923710", 00:08:26.704 "is_configured": true, 00:08:26.704 "data_offset": 2048, 00:08:26.704 "data_size": 63488 00:08:26.704 } 00:08:26.704 ] 00:08:26.704 }' 00:08:26.704 04:06:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:26.704 04:06:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.275 04:06:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:27.275 04:06:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:27.275 [2024-11-21 04:06:27.109712] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:08:28.215 04:06:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:28.215 04:06:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.215 04:06:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.215 [2024-11-21 04:06:28.032446] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:08:28.215 [2024-11-21 04:06:28.032505] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:28.215 [2024-11-21 04:06:28.032764] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002a10 00:08:28.215 04:06:28 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.215 04:06:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:28.215 04:06:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:28.215 04:06:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:08:28.215 04:06:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:08:28.215 04:06:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:28.216 04:06:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:28.216 04:06:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:28.216 04:06:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:28.216 04:06:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:28.216 04:06:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:28.216 04:06:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.216 04:06:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:28.216 04:06:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:28.216 04:06:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.216 04:06:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.216 04:06:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:28.216 04:06:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:28.216 04:06:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.216 04:06:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.216 04:06:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:28.216 "name": "raid_bdev1", 00:08:28.216 "uuid": "dbe463c9-1537-43a5-9a33-8e9018cd2c35", 00:08:28.216 "strip_size_kb": 0, 00:08:28.216 "state": "online", 00:08:28.216 "raid_level": "raid1", 00:08:28.216 "superblock": true, 00:08:28.216 "num_base_bdevs": 2, 00:08:28.216 "num_base_bdevs_discovered": 1, 00:08:28.216 "num_base_bdevs_operational": 1, 00:08:28.216 "base_bdevs_list": [ 00:08:28.216 { 00:08:28.216 "name": null, 00:08:28.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.216 "is_configured": false, 00:08:28.216 "data_offset": 0, 00:08:28.216 "data_size": 63488 00:08:28.216 }, 00:08:28.216 { 00:08:28.216 "name": "BaseBdev2", 00:08:28.216 "uuid": "70bda26d-a6ef-5af4-b8b2-d38fa8923710", 00:08:28.216 "is_configured": true, 00:08:28.216 "data_offset": 2048, 00:08:28.216 "data_size": 63488 00:08:28.216 } 00:08:28.216 ] 00:08:28.216 }' 00:08:28.216 04:06:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.216 04:06:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.784 04:06:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:28.784 04:06:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.784 04:06:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.784 [2024-11-21 04:06:28.489636] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:28.784 [2024-11-21 04:06:28.489752] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:28.784 [2024-11-21 04:06:28.492317] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:28.784 [2024-11-21 04:06:28.492441] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:28.784 [2024-11-21 04:06:28.492554] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:28.784 [2024-11-21 04:06:28.492637] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:08:28.784 { 00:08:28.784 "results": [ 00:08:28.784 { 00:08:28.784 "job": "raid_bdev1", 00:08:28.784 "core_mask": "0x1", 00:08:28.784 "workload": "randrw", 00:08:28.784 "percentage": 50, 00:08:28.784 "status": "finished", 00:08:28.784 "queue_depth": 1, 00:08:28.784 "io_size": 131072, 00:08:28.784 "runtime": 1.380456, 00:08:28.784 "iops": 19291.451520367184, 00:08:28.784 "mibps": 2411.431440045898, 00:08:28.784 "io_failed": 0, 00:08:28.784 "io_timeout": 0, 00:08:28.784 "avg_latency_us": 49.30177973301299, 00:08:28.784 "min_latency_us": 21.240174672489083, 00:08:28.784 "max_latency_us": 1387.989519650655 00:08:28.784 } 00:08:28.784 ], 00:08:28.784 "core_count": 1 00:08:28.784 } 00:08:28.784 04:06:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.784 04:06:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74902 00:08:28.784 04:06:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 74902 ']' 00:08:28.784 04:06:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 74902 00:08:28.784 04:06:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:28.785 04:06:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:28.785 04:06:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74902 00:08:28.785 04:06:28 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:28.785 04:06:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:28.785 killing process with pid 74902 00:08:28.785 04:06:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74902' 00:08:28.785 04:06:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 74902 00:08:28.785 [2024-11-21 04:06:28.541134] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:28.785 04:06:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 74902 00:08:28.785 [2024-11-21 04:06:28.571180] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:29.045 04:06:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.eofMHbqIzC 00:08:29.046 04:06:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:29.046 04:06:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:29.046 04:06:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:29.046 04:06:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:29.046 04:06:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:29.046 04:06:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:29.046 04:06:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:29.046 00:08:29.046 real 0m3.350s 00:08:29.046 user 0m4.148s 00:08:29.046 sys 0m0.601s 00:08:29.046 04:06:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:29.046 ************************************ 00:08:29.046 END TEST raid_write_error_test 00:08:29.046 ************************************ 00:08:29.046 04:06:28 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:29.046 04:06:28 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:08:29.046 04:06:28 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:29.046 04:06:28 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:08:29.046 04:06:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:29.046 04:06:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:29.046 04:06:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:29.046 ************************************ 00:08:29.046 START TEST raid_state_function_test 00:08:29.046 ************************************ 00:08:29.046 04:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false 00:08:29.046 04:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:29.046 04:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:29.046 04:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:29.046 04:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:29.046 04:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:29.046 04:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:29.046 04:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:29.046 04:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:29.046 04:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:29.046 04:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:29.046 04:06:28 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:29.046 04:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:29.046 04:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:29.046 04:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:29.046 04:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:29.046 04:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:29.046 04:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:29.046 04:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:29.046 04:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:29.046 04:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:29.046 04:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:29.046 04:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:29.046 04:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:29.046 04:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:29.046 04:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:29.046 04:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:29.046 04:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=75029 00:08:29.046 04:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:29.046 04:06:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 75029' 00:08:29.046 Process raid pid: 75029 00:08:29.046 04:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 75029 00:08:29.046 04:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 75029 ']' 00:08:29.046 04:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:29.046 04:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:29.046 04:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:29.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:29.046 04:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:29.046 04:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.306 [2024-11-21 04:06:29.071118] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:08:29.307 [2024-11-21 04:06:29.071354] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:29.307 [2024-11-21 04:06:29.227688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.307 [2024-11-21 04:06:29.266571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.567 [2024-11-21 04:06:29.343013] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:29.567 [2024-11-21 04:06:29.343121] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:30.138 04:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:30.138 04:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:30.138 04:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:30.138 04:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.138 04:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.138 [2024-11-21 04:06:29.910334] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:30.138 [2024-11-21 04:06:29.910470] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:30.138 [2024-11-21 04:06:29.910512] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:30.138 [2024-11-21 04:06:29.910565] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:30.138 [2024-11-21 04:06:29.910597] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 
00:08:30.138 [2024-11-21 04:06:29.910650] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:30.138 04:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.138 04:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:30.138 04:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:30.138 04:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:30.138 04:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:30.138 04:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:30.138 04:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:30.138 04:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:30.138 04:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:30.138 04:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:30.138 04:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:30.138 04:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.138 04:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:30.138 04:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.138 04:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.138 04:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.138 04:06:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:30.138 "name": "Existed_Raid", 00:08:30.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:30.138 "strip_size_kb": 64, 00:08:30.138 "state": "configuring", 00:08:30.138 "raid_level": "raid0", 00:08:30.138 "superblock": false, 00:08:30.138 "num_base_bdevs": 3, 00:08:30.138 "num_base_bdevs_discovered": 0, 00:08:30.138 "num_base_bdevs_operational": 3, 00:08:30.138 "base_bdevs_list": [ 00:08:30.138 { 00:08:30.138 "name": "BaseBdev1", 00:08:30.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:30.138 "is_configured": false, 00:08:30.138 "data_offset": 0, 00:08:30.138 "data_size": 0 00:08:30.138 }, 00:08:30.138 { 00:08:30.138 "name": "BaseBdev2", 00:08:30.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:30.138 "is_configured": false, 00:08:30.138 "data_offset": 0, 00:08:30.138 "data_size": 0 00:08:30.138 }, 00:08:30.138 { 00:08:30.138 "name": "BaseBdev3", 00:08:30.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:30.138 "is_configured": false, 00:08:30.138 "data_offset": 0, 00:08:30.138 "data_size": 0 00:08:30.138 } 00:08:30.138 ] 00:08:30.138 }' 00:08:30.138 04:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:30.138 04:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.708 04:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:30.708 04:06:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.708 04:06:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.708 [2024-11-21 04:06:30.381529] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:30.708 [2024-11-21 04:06:30.381631] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 
00:08:30.708 04:06:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.708 04:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:30.708 04:06:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.708 04:06:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.708 [2024-11-21 04:06:30.393492] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:30.708 [2024-11-21 04:06:30.393590] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:30.708 [2024-11-21 04:06:30.393640] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:30.708 [2024-11-21 04:06:30.393686] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:30.708 [2024-11-21 04:06:30.393717] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:30.708 [2024-11-21 04:06:30.393760] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:30.708 04:06:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.708 04:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:30.708 04:06:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.708 04:06:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.708 [2024-11-21 04:06:30.420725] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:30.708 BaseBdev1 00:08:30.708 04:06:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:30.708 04:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:30.708 04:06:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:30.708 04:06:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:30.708 04:06:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:30.708 04:06:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:30.708 04:06:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:30.708 04:06:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:30.708 04:06:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.708 04:06:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.708 04:06:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.708 04:06:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:30.708 04:06:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.708 04:06:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.709 [ 00:08:30.709 { 00:08:30.709 "name": "BaseBdev1", 00:08:30.709 "aliases": [ 00:08:30.709 "ce4c71a6-df13-4b12-a714-2d8841d0f3c3" 00:08:30.709 ], 00:08:30.709 "product_name": "Malloc disk", 00:08:30.709 "block_size": 512, 00:08:30.709 "num_blocks": 65536, 00:08:30.709 "uuid": "ce4c71a6-df13-4b12-a714-2d8841d0f3c3", 00:08:30.709 "assigned_rate_limits": { 00:08:30.709 "rw_ios_per_sec": 0, 00:08:30.709 "rw_mbytes_per_sec": 0, 00:08:30.709 "r_mbytes_per_sec": 0, 00:08:30.709 "w_mbytes_per_sec": 0 00:08:30.709 }, 
00:08:30.709 "claimed": true, 00:08:30.709 "claim_type": "exclusive_write", 00:08:30.709 "zoned": false, 00:08:30.709 "supported_io_types": { 00:08:30.709 "read": true, 00:08:30.709 "write": true, 00:08:30.709 "unmap": true, 00:08:30.709 "flush": true, 00:08:30.709 "reset": true, 00:08:30.709 "nvme_admin": false, 00:08:30.709 "nvme_io": false, 00:08:30.709 "nvme_io_md": false, 00:08:30.709 "write_zeroes": true, 00:08:30.709 "zcopy": true, 00:08:30.709 "get_zone_info": false, 00:08:30.709 "zone_management": false, 00:08:30.709 "zone_append": false, 00:08:30.709 "compare": false, 00:08:30.709 "compare_and_write": false, 00:08:30.709 "abort": true, 00:08:30.709 "seek_hole": false, 00:08:30.709 "seek_data": false, 00:08:30.709 "copy": true, 00:08:30.709 "nvme_iov_md": false 00:08:30.709 }, 00:08:30.709 "memory_domains": [ 00:08:30.709 { 00:08:30.709 "dma_device_id": "system", 00:08:30.709 "dma_device_type": 1 00:08:30.709 }, 00:08:30.709 { 00:08:30.709 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:30.709 "dma_device_type": 2 00:08:30.709 } 00:08:30.709 ], 00:08:30.709 "driver_specific": {} 00:08:30.709 } 00:08:30.709 ] 00:08:30.709 04:06:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.709 04:06:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:30.709 04:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:30.709 04:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:30.709 04:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:30.709 04:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:30.709 04:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:30.709 04:06:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:30.709 04:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:30.709 04:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:30.709 04:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:30.709 04:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:30.709 04:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.709 04:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:30.709 04:06:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.709 04:06:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.709 04:06:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.709 04:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:30.709 "name": "Existed_Raid", 00:08:30.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:30.709 "strip_size_kb": 64, 00:08:30.709 "state": "configuring", 00:08:30.709 "raid_level": "raid0", 00:08:30.709 "superblock": false, 00:08:30.709 "num_base_bdevs": 3, 00:08:30.709 "num_base_bdevs_discovered": 1, 00:08:30.709 "num_base_bdevs_operational": 3, 00:08:30.709 "base_bdevs_list": [ 00:08:30.709 { 00:08:30.709 "name": "BaseBdev1", 00:08:30.709 "uuid": "ce4c71a6-df13-4b12-a714-2d8841d0f3c3", 00:08:30.709 "is_configured": true, 00:08:30.709 "data_offset": 0, 00:08:30.709 "data_size": 65536 00:08:30.709 }, 00:08:30.709 { 00:08:30.709 "name": "BaseBdev2", 00:08:30.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:30.709 "is_configured": false, 00:08:30.709 
"data_offset": 0, 00:08:30.709 "data_size": 0 00:08:30.709 }, 00:08:30.709 { 00:08:30.709 "name": "BaseBdev3", 00:08:30.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:30.709 "is_configured": false, 00:08:30.709 "data_offset": 0, 00:08:30.709 "data_size": 0 00:08:30.709 } 00:08:30.709 ] 00:08:30.709 }' 00:08:30.709 04:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:30.709 04:06:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.969 04:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:30.969 04:06:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.969 04:06:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.969 [2024-11-21 04:06:30.876100] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:30.969 [2024-11-21 04:06:30.876279] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:08:30.969 04:06:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.969 04:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:30.969 04:06:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.969 04:06:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.969 [2024-11-21 04:06:30.884117] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:30.970 [2024-11-21 04:06:30.886421] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:30.970 [2024-11-21 04:06:30.886514] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:08:30.970 [2024-11-21 04:06:30.886566] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:30.970 [2024-11-21 04:06:30.886612] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:30.970 04:06:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.970 04:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:30.970 04:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:30.970 04:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:30.970 04:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:30.970 04:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:30.970 04:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:30.970 04:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:30.970 04:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:30.970 04:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:30.970 04:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:30.970 04:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:30.970 04:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:30.970 04:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:30.970 04:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:08:30.970 04:06:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.970 04:06:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.970 04:06:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.970 04:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:30.970 "name": "Existed_Raid", 00:08:30.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:30.970 "strip_size_kb": 64, 00:08:30.970 "state": "configuring", 00:08:30.970 "raid_level": "raid0", 00:08:30.970 "superblock": false, 00:08:30.970 "num_base_bdevs": 3, 00:08:30.970 "num_base_bdevs_discovered": 1, 00:08:30.970 "num_base_bdevs_operational": 3, 00:08:30.970 "base_bdevs_list": [ 00:08:30.970 { 00:08:30.970 "name": "BaseBdev1", 00:08:30.970 "uuid": "ce4c71a6-df13-4b12-a714-2d8841d0f3c3", 00:08:30.970 "is_configured": true, 00:08:30.970 "data_offset": 0, 00:08:30.970 "data_size": 65536 00:08:30.970 }, 00:08:30.970 { 00:08:30.970 "name": "BaseBdev2", 00:08:30.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:30.970 "is_configured": false, 00:08:30.970 "data_offset": 0, 00:08:30.970 "data_size": 0 00:08:30.970 }, 00:08:30.970 { 00:08:30.970 "name": "BaseBdev3", 00:08:30.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:30.970 "is_configured": false, 00:08:30.970 "data_offset": 0, 00:08:30.970 "data_size": 0 00:08:30.970 } 00:08:30.970 ] 00:08:30.970 }' 00:08:30.970 04:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:30.970 04:06:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.539 04:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:31.539 04:06:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:31.540 04:06:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.540 [2024-11-21 04:06:31.384084] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:31.540 BaseBdev2 00:08:31.540 04:06:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.540 04:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:31.540 04:06:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:31.540 04:06:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:31.540 04:06:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:31.540 04:06:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:31.540 04:06:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:31.540 04:06:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:31.540 04:06:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.540 04:06:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.540 04:06:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.540 04:06:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:31.540 04:06:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.540 04:06:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.540 [ 00:08:31.540 { 00:08:31.540 "name": "BaseBdev2", 00:08:31.540 "aliases": [ 00:08:31.540 "8c9e906b-4094-420f-adc6-0f0c4cf59be8" 00:08:31.540 ], 00:08:31.540 
"product_name": "Malloc disk", 00:08:31.540 "block_size": 512, 00:08:31.540 "num_blocks": 65536, 00:08:31.540 "uuid": "8c9e906b-4094-420f-adc6-0f0c4cf59be8", 00:08:31.540 "assigned_rate_limits": { 00:08:31.540 "rw_ios_per_sec": 0, 00:08:31.540 "rw_mbytes_per_sec": 0, 00:08:31.540 "r_mbytes_per_sec": 0, 00:08:31.540 "w_mbytes_per_sec": 0 00:08:31.540 }, 00:08:31.540 "claimed": true, 00:08:31.540 "claim_type": "exclusive_write", 00:08:31.540 "zoned": false, 00:08:31.540 "supported_io_types": { 00:08:31.540 "read": true, 00:08:31.540 "write": true, 00:08:31.540 "unmap": true, 00:08:31.540 "flush": true, 00:08:31.540 "reset": true, 00:08:31.540 "nvme_admin": false, 00:08:31.540 "nvme_io": false, 00:08:31.540 "nvme_io_md": false, 00:08:31.540 "write_zeroes": true, 00:08:31.540 "zcopy": true, 00:08:31.540 "get_zone_info": false, 00:08:31.540 "zone_management": false, 00:08:31.540 "zone_append": false, 00:08:31.540 "compare": false, 00:08:31.540 "compare_and_write": false, 00:08:31.540 "abort": true, 00:08:31.540 "seek_hole": false, 00:08:31.540 "seek_data": false, 00:08:31.540 "copy": true, 00:08:31.540 "nvme_iov_md": false 00:08:31.540 }, 00:08:31.540 "memory_domains": [ 00:08:31.540 { 00:08:31.540 "dma_device_id": "system", 00:08:31.540 "dma_device_type": 1 00:08:31.540 }, 00:08:31.540 { 00:08:31.540 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:31.540 "dma_device_type": 2 00:08:31.540 } 00:08:31.540 ], 00:08:31.540 "driver_specific": {} 00:08:31.540 } 00:08:31.540 ] 00:08:31.540 04:06:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.540 04:06:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:31.540 04:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:31.540 04:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:31.540 04:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:31.540 04:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:31.540 04:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:31.540 04:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:31.540 04:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:31.540 04:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:31.540 04:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:31.540 04:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:31.540 04:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:31.540 04:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:31.540 04:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.540 04:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:31.540 04:06:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.540 04:06:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.540 04:06:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.540 04:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:31.540 "name": "Existed_Raid", 00:08:31.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:31.540 "strip_size_kb": 64, 00:08:31.540 "state": "configuring", 00:08:31.540 "raid_level": "raid0", 00:08:31.540 "superblock": false, 00:08:31.540 
"num_base_bdevs": 3, 00:08:31.540 "num_base_bdevs_discovered": 2, 00:08:31.540 "num_base_bdevs_operational": 3, 00:08:31.540 "base_bdevs_list": [ 00:08:31.540 { 00:08:31.540 "name": "BaseBdev1", 00:08:31.540 "uuid": "ce4c71a6-df13-4b12-a714-2d8841d0f3c3", 00:08:31.540 "is_configured": true, 00:08:31.540 "data_offset": 0, 00:08:31.540 "data_size": 65536 00:08:31.540 }, 00:08:31.540 { 00:08:31.540 "name": "BaseBdev2", 00:08:31.540 "uuid": "8c9e906b-4094-420f-adc6-0f0c4cf59be8", 00:08:31.540 "is_configured": true, 00:08:31.540 "data_offset": 0, 00:08:31.540 "data_size": 65536 00:08:31.540 }, 00:08:31.540 { 00:08:31.540 "name": "BaseBdev3", 00:08:31.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:31.540 "is_configured": false, 00:08:31.540 "data_offset": 0, 00:08:31.540 "data_size": 0 00:08:31.540 } 00:08:31.540 ] 00:08:31.540 }' 00:08:31.540 04:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:31.540 04:06:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.110 04:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:32.110 04:06:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.110 04:06:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.110 [2024-11-21 04:06:31.922947] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:32.110 [2024-11-21 04:06:31.923291] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:08:32.110 [2024-11-21 04:06:31.923532] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:32.110 [2024-11-21 04:06:31.924738] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:08:32.110 [2024-11-21 04:06:31.925220] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000001900 00:08:32.110 [2024-11-21 04:06:31.925322] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:08:32.110 [2024-11-21 04:06:31.925985] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:32.110 BaseBdev3 00:08:32.110 04:06:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.110 04:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:32.110 04:06:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:32.110 04:06:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:32.110 04:06:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:32.110 04:06:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:32.110 04:06:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:32.110 04:06:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:32.110 04:06:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.110 04:06:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.110 04:06:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.110 04:06:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:32.110 04:06:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.110 04:06:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.110 [ 00:08:32.110 { 00:08:32.110 "name": "BaseBdev3", 00:08:32.110 "aliases": [ 00:08:32.110 
"242f54ae-43ff-44f1-b6e1-deef8a5a2660" 00:08:32.110 ], 00:08:32.110 "product_name": "Malloc disk", 00:08:32.110 "block_size": 512, 00:08:32.110 "num_blocks": 65536, 00:08:32.110 "uuid": "242f54ae-43ff-44f1-b6e1-deef8a5a2660", 00:08:32.110 "assigned_rate_limits": { 00:08:32.110 "rw_ios_per_sec": 0, 00:08:32.110 "rw_mbytes_per_sec": 0, 00:08:32.110 "r_mbytes_per_sec": 0, 00:08:32.110 "w_mbytes_per_sec": 0 00:08:32.110 }, 00:08:32.110 "claimed": true, 00:08:32.110 "claim_type": "exclusive_write", 00:08:32.110 "zoned": false, 00:08:32.110 "supported_io_types": { 00:08:32.110 "read": true, 00:08:32.110 "write": true, 00:08:32.110 "unmap": true, 00:08:32.110 "flush": true, 00:08:32.110 "reset": true, 00:08:32.110 "nvme_admin": false, 00:08:32.110 "nvme_io": false, 00:08:32.110 "nvme_io_md": false, 00:08:32.110 "write_zeroes": true, 00:08:32.110 "zcopy": true, 00:08:32.110 "get_zone_info": false, 00:08:32.110 "zone_management": false, 00:08:32.110 "zone_append": false, 00:08:32.110 "compare": false, 00:08:32.110 "compare_and_write": false, 00:08:32.110 "abort": true, 00:08:32.110 "seek_hole": false, 00:08:32.110 "seek_data": false, 00:08:32.110 "copy": true, 00:08:32.110 "nvme_iov_md": false 00:08:32.110 }, 00:08:32.110 "memory_domains": [ 00:08:32.110 { 00:08:32.110 "dma_device_id": "system", 00:08:32.110 "dma_device_type": 1 00:08:32.110 }, 00:08:32.110 { 00:08:32.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.110 "dma_device_type": 2 00:08:32.110 } 00:08:32.110 ], 00:08:32.110 "driver_specific": {} 00:08:32.110 } 00:08:32.110 ] 00:08:32.110 04:06:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.110 04:06:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:32.110 04:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:32.110 04:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:32.110 
04:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:32.110 04:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:32.110 04:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:32.110 04:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:32.110 04:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:32.110 04:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:32.110 04:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:32.110 04:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:32.110 04:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:32.110 04:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:32.111 04:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.111 04:06:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.111 04:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:32.111 04:06:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.111 04:06:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.111 04:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:32.111 "name": "Existed_Raid", 00:08:32.111 "uuid": "8d83a28b-c0bd-43cc-93ce-45a9a6074371", 00:08:32.111 "strip_size_kb": 64, 00:08:32.111 "state": "online", 00:08:32.111 
"raid_level": "raid0", 00:08:32.111 "superblock": false, 00:08:32.111 "num_base_bdevs": 3, 00:08:32.111 "num_base_bdevs_discovered": 3, 00:08:32.111 "num_base_bdevs_operational": 3, 00:08:32.111 "base_bdevs_list": [ 00:08:32.111 { 00:08:32.111 "name": "BaseBdev1", 00:08:32.111 "uuid": "ce4c71a6-df13-4b12-a714-2d8841d0f3c3", 00:08:32.111 "is_configured": true, 00:08:32.111 "data_offset": 0, 00:08:32.111 "data_size": 65536 00:08:32.111 }, 00:08:32.111 { 00:08:32.111 "name": "BaseBdev2", 00:08:32.111 "uuid": "8c9e906b-4094-420f-adc6-0f0c4cf59be8", 00:08:32.111 "is_configured": true, 00:08:32.111 "data_offset": 0, 00:08:32.111 "data_size": 65536 00:08:32.111 }, 00:08:32.111 { 00:08:32.111 "name": "BaseBdev3", 00:08:32.111 "uuid": "242f54ae-43ff-44f1-b6e1-deef8a5a2660", 00:08:32.111 "is_configured": true, 00:08:32.111 "data_offset": 0, 00:08:32.111 "data_size": 65536 00:08:32.111 } 00:08:32.111 ] 00:08:32.111 }' 00:08:32.111 04:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:32.111 04:06:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.680 04:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:32.680 04:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:32.680 04:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:32.680 04:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:32.680 04:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:32.680 04:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:32.680 04:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:32.680 04:06:32 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:32.680 04:06:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.680 04:06:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.680 [2024-11-21 04:06:32.454338] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:32.680 04:06:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.680 04:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:32.680 "name": "Existed_Raid", 00:08:32.680 "aliases": [ 00:08:32.680 "8d83a28b-c0bd-43cc-93ce-45a9a6074371" 00:08:32.680 ], 00:08:32.680 "product_name": "Raid Volume", 00:08:32.680 "block_size": 512, 00:08:32.680 "num_blocks": 196608, 00:08:32.680 "uuid": "8d83a28b-c0bd-43cc-93ce-45a9a6074371", 00:08:32.680 "assigned_rate_limits": { 00:08:32.680 "rw_ios_per_sec": 0, 00:08:32.680 "rw_mbytes_per_sec": 0, 00:08:32.680 "r_mbytes_per_sec": 0, 00:08:32.680 "w_mbytes_per_sec": 0 00:08:32.680 }, 00:08:32.680 "claimed": false, 00:08:32.680 "zoned": false, 00:08:32.680 "supported_io_types": { 00:08:32.680 "read": true, 00:08:32.680 "write": true, 00:08:32.680 "unmap": true, 00:08:32.680 "flush": true, 00:08:32.680 "reset": true, 00:08:32.680 "nvme_admin": false, 00:08:32.680 "nvme_io": false, 00:08:32.680 "nvme_io_md": false, 00:08:32.680 "write_zeroes": true, 00:08:32.680 "zcopy": false, 00:08:32.680 "get_zone_info": false, 00:08:32.680 "zone_management": false, 00:08:32.680 "zone_append": false, 00:08:32.680 "compare": false, 00:08:32.680 "compare_and_write": false, 00:08:32.680 "abort": false, 00:08:32.680 "seek_hole": false, 00:08:32.680 "seek_data": false, 00:08:32.680 "copy": false, 00:08:32.680 "nvme_iov_md": false 00:08:32.680 }, 00:08:32.680 "memory_domains": [ 00:08:32.680 { 00:08:32.680 "dma_device_id": "system", 00:08:32.680 "dma_device_type": 1 00:08:32.680 }, 00:08:32.680 { 
00:08:32.680 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.680 "dma_device_type": 2 00:08:32.680 }, 00:08:32.680 { 00:08:32.680 "dma_device_id": "system", 00:08:32.680 "dma_device_type": 1 00:08:32.680 }, 00:08:32.680 { 00:08:32.680 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.680 "dma_device_type": 2 00:08:32.680 }, 00:08:32.680 { 00:08:32.680 "dma_device_id": "system", 00:08:32.680 "dma_device_type": 1 00:08:32.680 }, 00:08:32.680 { 00:08:32.680 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.680 "dma_device_type": 2 00:08:32.680 } 00:08:32.680 ], 00:08:32.680 "driver_specific": { 00:08:32.680 "raid": { 00:08:32.680 "uuid": "8d83a28b-c0bd-43cc-93ce-45a9a6074371", 00:08:32.680 "strip_size_kb": 64, 00:08:32.680 "state": "online", 00:08:32.680 "raid_level": "raid0", 00:08:32.680 "superblock": false, 00:08:32.680 "num_base_bdevs": 3, 00:08:32.680 "num_base_bdevs_discovered": 3, 00:08:32.680 "num_base_bdevs_operational": 3, 00:08:32.680 "base_bdevs_list": [ 00:08:32.680 { 00:08:32.680 "name": "BaseBdev1", 00:08:32.680 "uuid": "ce4c71a6-df13-4b12-a714-2d8841d0f3c3", 00:08:32.680 "is_configured": true, 00:08:32.680 "data_offset": 0, 00:08:32.680 "data_size": 65536 00:08:32.680 }, 00:08:32.680 { 00:08:32.680 "name": "BaseBdev2", 00:08:32.680 "uuid": "8c9e906b-4094-420f-adc6-0f0c4cf59be8", 00:08:32.680 "is_configured": true, 00:08:32.680 "data_offset": 0, 00:08:32.680 "data_size": 65536 00:08:32.680 }, 00:08:32.680 { 00:08:32.680 "name": "BaseBdev3", 00:08:32.680 "uuid": "242f54ae-43ff-44f1-b6e1-deef8a5a2660", 00:08:32.680 "is_configured": true, 00:08:32.680 "data_offset": 0, 00:08:32.680 "data_size": 65536 00:08:32.680 } 00:08:32.680 ] 00:08:32.680 } 00:08:32.680 } 00:08:32.680 }' 00:08:32.681 04:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:32.681 04:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # 
base_bdev_names='BaseBdev1 00:08:32.681 BaseBdev2 00:08:32.681 BaseBdev3' 00:08:32.681 04:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:32.681 04:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:32.681 04:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:32.681 04:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:32.681 04:06:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.681 04:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:32.681 04:06:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.681 04:06:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.681 04:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:32.681 04:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:32.681 04:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:32.681 04:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:32.681 04:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:32.681 04:06:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.681 04:06:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.681 04:06:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:08:32.941 04:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:32.941 04:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:32.941 04:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:32.941 04:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:32.941 04:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:32.941 04:06:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.941 04:06:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.941 04:06:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.941 04:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:32.941 04:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:32.941 04:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:32.941 04:06:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.941 04:06:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.941 [2024-11-21 04:06:32.717607] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:32.941 [2024-11-21 04:06:32.717701] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:32.941 [2024-11-21 04:06:32.717783] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:32.941 04:06:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.941 04:06:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:32.941 04:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:32.941 04:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:32.941 04:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:32.941 04:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:32.941 04:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:32.941 04:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:32.941 04:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:32.941 04:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:32.941 04:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:32.941 04:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:32.941 04:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:32.941 04:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:32.941 04:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:32.941 04:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:32.941 04:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.941 04:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:32.941 04:06:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:32.941 04:06:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.941 04:06:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.941 04:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:32.941 "name": "Existed_Raid", 00:08:32.941 "uuid": "8d83a28b-c0bd-43cc-93ce-45a9a6074371", 00:08:32.941 "strip_size_kb": 64, 00:08:32.941 "state": "offline", 00:08:32.941 "raid_level": "raid0", 00:08:32.941 "superblock": false, 00:08:32.941 "num_base_bdevs": 3, 00:08:32.941 "num_base_bdevs_discovered": 2, 00:08:32.941 "num_base_bdevs_operational": 2, 00:08:32.941 "base_bdevs_list": [ 00:08:32.941 { 00:08:32.941 "name": null, 00:08:32.941 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:32.941 "is_configured": false, 00:08:32.941 "data_offset": 0, 00:08:32.941 "data_size": 65536 00:08:32.941 }, 00:08:32.941 { 00:08:32.941 "name": "BaseBdev2", 00:08:32.941 "uuid": "8c9e906b-4094-420f-adc6-0f0c4cf59be8", 00:08:32.941 "is_configured": true, 00:08:32.941 "data_offset": 0, 00:08:32.941 "data_size": 65536 00:08:32.941 }, 00:08:32.941 { 00:08:32.941 "name": "BaseBdev3", 00:08:32.941 "uuid": "242f54ae-43ff-44f1-b6e1-deef8a5a2660", 00:08:32.941 "is_configured": true, 00:08:32.941 "data_offset": 0, 00:08:32.941 "data_size": 65536 00:08:32.941 } 00:08:32.941 ] 00:08:32.941 }' 00:08:32.941 04:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:32.941 04:06:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.200 04:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:33.200 04:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:33.200 04:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.459 04:06:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:33.459 04:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.459 04:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.459 04:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.459 04:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:33.459 04:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:33.459 04:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:33.459 04:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.459 04:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.459 [2024-11-21 04:06:33.226076] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:33.459 04:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.459 04:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:33.459 04:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:33.459 04:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.459 04:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:33.459 04:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.459 04:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.459 04:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.459 04:06:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:33.459 04:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:33.459 04:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:33.459 04:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.459 04:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.459 [2024-11-21 04:06:33.306683] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:33.459 [2024-11-21 04:06:33.306743] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:08:33.459 04:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.459 04:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:33.459 04:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:33.459 04:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.459 04:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:33.459 04:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.459 04:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.459 04:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.459 04:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:33.459 04:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:33.459 04:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:33.459 
04:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:33.459 04:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:33.459 04:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:33.459 04:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.459 04:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.459 BaseBdev2 00:08:33.459 04:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.459 04:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:33.459 04:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:33.459 04:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:33.459 04:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:33.459 04:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:33.459 04:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:33.459 04:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:33.459 04:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.459 04:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.459 04:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.459 04:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:33.459 04:06:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.459 04:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.719 [ 00:08:33.719 { 00:08:33.719 "name": "BaseBdev2", 00:08:33.719 "aliases": [ 00:08:33.719 "16ec5b1c-2080-49a3-a1db-9a9312b2d79b" 00:08:33.719 ], 00:08:33.719 "product_name": "Malloc disk", 00:08:33.719 "block_size": 512, 00:08:33.719 "num_blocks": 65536, 00:08:33.719 "uuid": "16ec5b1c-2080-49a3-a1db-9a9312b2d79b", 00:08:33.719 "assigned_rate_limits": { 00:08:33.719 "rw_ios_per_sec": 0, 00:08:33.719 "rw_mbytes_per_sec": 0, 00:08:33.719 "r_mbytes_per_sec": 0, 00:08:33.719 "w_mbytes_per_sec": 0 00:08:33.719 }, 00:08:33.719 "claimed": false, 00:08:33.719 "zoned": false, 00:08:33.719 "supported_io_types": { 00:08:33.719 "read": true, 00:08:33.719 "write": true, 00:08:33.719 "unmap": true, 00:08:33.719 "flush": true, 00:08:33.719 "reset": true, 00:08:33.719 "nvme_admin": false, 00:08:33.719 "nvme_io": false, 00:08:33.719 "nvme_io_md": false, 00:08:33.719 "write_zeroes": true, 00:08:33.719 "zcopy": true, 00:08:33.719 "get_zone_info": false, 00:08:33.719 "zone_management": false, 00:08:33.719 "zone_append": false, 00:08:33.719 "compare": false, 00:08:33.719 "compare_and_write": false, 00:08:33.719 "abort": true, 00:08:33.719 "seek_hole": false, 00:08:33.719 "seek_data": false, 00:08:33.719 "copy": true, 00:08:33.719 "nvme_iov_md": false 00:08:33.719 }, 00:08:33.719 "memory_domains": [ 00:08:33.719 { 00:08:33.719 "dma_device_id": "system", 00:08:33.719 "dma_device_type": 1 00:08:33.719 }, 00:08:33.719 { 00:08:33.719 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:33.719 "dma_device_type": 2 00:08:33.719 } 00:08:33.719 ], 00:08:33.719 "driver_specific": {} 00:08:33.719 } 00:08:33.719 ] 00:08:33.719 04:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.719 04:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:33.719 
04:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:33.719 04:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:33.719 04:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:33.719 04:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.719 04:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.719 BaseBdev3 00:08:33.719 04:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.719 04:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:33.719 04:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:33.719 04:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:33.719 04:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:33.719 04:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:33.719 04:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:33.719 04:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:33.719 04:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.719 04:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.719 04:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.719 04:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:33.719 04:06:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.719 04:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.719 [ 00:08:33.719 { 00:08:33.719 "name": "BaseBdev3", 00:08:33.719 "aliases": [ 00:08:33.719 "b0bc5d0d-121d-461e-abf2-6dda94bc53e1" 00:08:33.719 ], 00:08:33.719 "product_name": "Malloc disk", 00:08:33.719 "block_size": 512, 00:08:33.719 "num_blocks": 65536, 00:08:33.719 "uuid": "b0bc5d0d-121d-461e-abf2-6dda94bc53e1", 00:08:33.719 "assigned_rate_limits": { 00:08:33.719 "rw_ios_per_sec": 0, 00:08:33.719 "rw_mbytes_per_sec": 0, 00:08:33.719 "r_mbytes_per_sec": 0, 00:08:33.719 "w_mbytes_per_sec": 0 00:08:33.719 }, 00:08:33.719 "claimed": false, 00:08:33.719 "zoned": false, 00:08:33.719 "supported_io_types": { 00:08:33.719 "read": true, 00:08:33.719 "write": true, 00:08:33.719 "unmap": true, 00:08:33.719 "flush": true, 00:08:33.719 "reset": true, 00:08:33.719 "nvme_admin": false, 00:08:33.719 "nvme_io": false, 00:08:33.719 "nvme_io_md": false, 00:08:33.719 "write_zeroes": true, 00:08:33.719 "zcopy": true, 00:08:33.719 "get_zone_info": false, 00:08:33.719 "zone_management": false, 00:08:33.719 "zone_append": false, 00:08:33.719 "compare": false, 00:08:33.719 "compare_and_write": false, 00:08:33.719 "abort": true, 00:08:33.719 "seek_hole": false, 00:08:33.719 "seek_data": false, 00:08:33.719 "copy": true, 00:08:33.719 "nvme_iov_md": false 00:08:33.719 }, 00:08:33.719 "memory_domains": [ 00:08:33.719 { 00:08:33.719 "dma_device_id": "system", 00:08:33.719 "dma_device_type": 1 00:08:33.719 }, 00:08:33.719 { 00:08:33.719 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:33.719 "dma_device_type": 2 00:08:33.719 } 00:08:33.719 ], 00:08:33.719 "driver_specific": {} 00:08:33.719 } 00:08:33.719 ] 00:08:33.719 04:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.719 04:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:33.719 
04:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:33.719 04:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:33.719 04:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:33.719 04:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.719 04:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.719 [2024-11-21 04:06:33.506607] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:33.719 [2024-11-21 04:06:33.506702] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:33.719 [2024-11-21 04:06:33.506771] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:33.719 [2024-11-21 04:06:33.508979] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:33.719 04:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.719 04:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:33.719 04:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:33.719 04:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:33.719 04:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:33.719 04:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:33.719 04:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:33.719 04:06:33 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:33.719 04:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:33.719 04:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.719 04:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:33.719 04:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.719 04:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:33.719 04:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.719 04:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.719 04:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.719 04:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.720 "name": "Existed_Raid", 00:08:33.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.720 "strip_size_kb": 64, 00:08:33.720 "state": "configuring", 00:08:33.720 "raid_level": "raid0", 00:08:33.720 "superblock": false, 00:08:33.720 "num_base_bdevs": 3, 00:08:33.720 "num_base_bdevs_discovered": 2, 00:08:33.720 "num_base_bdevs_operational": 3, 00:08:33.720 "base_bdevs_list": [ 00:08:33.720 { 00:08:33.720 "name": "BaseBdev1", 00:08:33.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.720 "is_configured": false, 00:08:33.720 "data_offset": 0, 00:08:33.720 "data_size": 0 00:08:33.720 }, 00:08:33.720 { 00:08:33.720 "name": "BaseBdev2", 00:08:33.720 "uuid": "16ec5b1c-2080-49a3-a1db-9a9312b2d79b", 00:08:33.720 "is_configured": true, 00:08:33.720 "data_offset": 0, 00:08:33.720 "data_size": 65536 00:08:33.720 }, 00:08:33.720 { 00:08:33.720 "name": "BaseBdev3", 00:08:33.720 "uuid": 
"b0bc5d0d-121d-461e-abf2-6dda94bc53e1", 00:08:33.720 "is_configured": true, 00:08:33.720 "data_offset": 0, 00:08:33.720 "data_size": 65536 00:08:33.720 } 00:08:33.720 ] 00:08:33.720 }' 00:08:33.720 04:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.720 04:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.288 04:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:34.288 04:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.288 04:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.288 [2024-11-21 04:06:33.961832] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:34.288 04:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.288 04:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:34.288 04:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:34.288 04:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:34.288 04:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:34.288 04:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:34.288 04:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:34.288 04:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.288 04:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.288 04:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:34.288 04:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.288 04:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.288 04:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.288 04:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.288 04:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:34.288 04:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.288 04:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:34.288 "name": "Existed_Raid", 00:08:34.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.288 "strip_size_kb": 64, 00:08:34.288 "state": "configuring", 00:08:34.288 "raid_level": "raid0", 00:08:34.288 "superblock": false, 00:08:34.288 "num_base_bdevs": 3, 00:08:34.288 "num_base_bdevs_discovered": 1, 00:08:34.288 "num_base_bdevs_operational": 3, 00:08:34.288 "base_bdevs_list": [ 00:08:34.288 { 00:08:34.288 "name": "BaseBdev1", 00:08:34.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.288 "is_configured": false, 00:08:34.288 "data_offset": 0, 00:08:34.288 "data_size": 0 00:08:34.288 }, 00:08:34.288 { 00:08:34.288 "name": null, 00:08:34.288 "uuid": "16ec5b1c-2080-49a3-a1db-9a9312b2d79b", 00:08:34.288 "is_configured": false, 00:08:34.288 "data_offset": 0, 00:08:34.288 "data_size": 65536 00:08:34.288 }, 00:08:34.288 { 00:08:34.288 "name": "BaseBdev3", 00:08:34.288 "uuid": "b0bc5d0d-121d-461e-abf2-6dda94bc53e1", 00:08:34.288 "is_configured": true, 00:08:34.288 "data_offset": 0, 00:08:34.288 "data_size": 65536 00:08:34.288 } 00:08:34.288 ] 00:08:34.288 }' 00:08:34.288 04:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:34.288 04:06:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.547 04:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.547 04:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:34.547 04:06:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.547 04:06:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.547 04:06:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.547 04:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:34.547 04:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:34.547 04:06:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.547 04:06:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.547 [2024-11-21 04:06:34.450125] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:34.547 BaseBdev1 00:08:34.547 04:06:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.547 04:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:34.547 04:06:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:34.547 04:06:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:34.547 04:06:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:34.547 04:06:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:34.547 04:06:34 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:34.547 04:06:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:34.547 04:06:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.547 04:06:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.547 04:06:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.547 04:06:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:34.547 04:06:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.547 04:06:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.547 [ 00:08:34.547 { 00:08:34.547 "name": "BaseBdev1", 00:08:34.547 "aliases": [ 00:08:34.547 "2c4aeea7-6baa-468b-810e-efcbdd680fad" 00:08:34.547 ], 00:08:34.547 "product_name": "Malloc disk", 00:08:34.547 "block_size": 512, 00:08:34.547 "num_blocks": 65536, 00:08:34.547 "uuid": "2c4aeea7-6baa-468b-810e-efcbdd680fad", 00:08:34.547 "assigned_rate_limits": { 00:08:34.547 "rw_ios_per_sec": 0, 00:08:34.547 "rw_mbytes_per_sec": 0, 00:08:34.547 "r_mbytes_per_sec": 0, 00:08:34.547 "w_mbytes_per_sec": 0 00:08:34.547 }, 00:08:34.547 "claimed": true, 00:08:34.547 "claim_type": "exclusive_write", 00:08:34.547 "zoned": false, 00:08:34.547 "supported_io_types": { 00:08:34.547 "read": true, 00:08:34.547 "write": true, 00:08:34.547 "unmap": true, 00:08:34.547 "flush": true, 00:08:34.547 "reset": true, 00:08:34.547 "nvme_admin": false, 00:08:34.547 "nvme_io": false, 00:08:34.547 "nvme_io_md": false, 00:08:34.547 "write_zeroes": true, 00:08:34.547 "zcopy": true, 00:08:34.547 "get_zone_info": false, 00:08:34.547 "zone_management": false, 00:08:34.547 "zone_append": false, 00:08:34.547 "compare": false, 00:08:34.547 "compare_and_write": false, 
00:08:34.547 "abort": true, 00:08:34.547 "seek_hole": false, 00:08:34.547 "seek_data": false, 00:08:34.547 "copy": true, 00:08:34.547 "nvme_iov_md": false 00:08:34.547 }, 00:08:34.547 "memory_domains": [ 00:08:34.547 { 00:08:34.547 "dma_device_id": "system", 00:08:34.547 "dma_device_type": 1 00:08:34.547 }, 00:08:34.547 { 00:08:34.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:34.547 "dma_device_type": 2 00:08:34.547 } 00:08:34.547 ], 00:08:34.547 "driver_specific": {} 00:08:34.547 } 00:08:34.547 ] 00:08:34.547 04:06:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.547 04:06:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:34.547 04:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:34.547 04:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:34.547 04:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:34.547 04:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:34.547 04:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:34.547 04:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:34.547 04:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.547 04:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.547 04:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.547 04:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.547 04:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:08:34.547 04:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:34.547 04:06:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.547 04:06:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.547 04:06:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.806 04:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:34.806 "name": "Existed_Raid", 00:08:34.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.806 "strip_size_kb": 64, 00:08:34.806 "state": "configuring", 00:08:34.806 "raid_level": "raid0", 00:08:34.806 "superblock": false, 00:08:34.806 "num_base_bdevs": 3, 00:08:34.806 "num_base_bdevs_discovered": 2, 00:08:34.806 "num_base_bdevs_operational": 3, 00:08:34.806 "base_bdevs_list": [ 00:08:34.806 { 00:08:34.806 "name": "BaseBdev1", 00:08:34.806 "uuid": "2c4aeea7-6baa-468b-810e-efcbdd680fad", 00:08:34.806 "is_configured": true, 00:08:34.806 "data_offset": 0, 00:08:34.806 "data_size": 65536 00:08:34.806 }, 00:08:34.806 { 00:08:34.806 "name": null, 00:08:34.806 "uuid": "16ec5b1c-2080-49a3-a1db-9a9312b2d79b", 00:08:34.806 "is_configured": false, 00:08:34.806 "data_offset": 0, 00:08:34.806 "data_size": 65536 00:08:34.806 }, 00:08:34.806 { 00:08:34.806 "name": "BaseBdev3", 00:08:34.806 "uuid": "b0bc5d0d-121d-461e-abf2-6dda94bc53e1", 00:08:34.806 "is_configured": true, 00:08:34.806 "data_offset": 0, 00:08:34.806 "data_size": 65536 00:08:34.806 } 00:08:34.806 ] 00:08:34.806 }' 00:08:34.806 04:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:34.806 04:06:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.064 04:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.065 04:06:34 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.065 04:06:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.065 04:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:35.065 04:06:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.065 04:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:35.065 04:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:35.065 04:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.065 04:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.065 [2024-11-21 04:06:35.021227] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:35.065 04:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.065 04:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:35.065 04:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:35.065 04:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:35.065 04:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:35.065 04:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:35.065 04:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:35.065 04:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:35.065 04:06:35 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.065 04:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:35.065 04:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.065 04:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.065 04:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.065 04:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.065 04:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:35.323 04:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.323 04:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:35.323 "name": "Existed_Raid", 00:08:35.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.323 "strip_size_kb": 64, 00:08:35.323 "state": "configuring", 00:08:35.323 "raid_level": "raid0", 00:08:35.323 "superblock": false, 00:08:35.323 "num_base_bdevs": 3, 00:08:35.323 "num_base_bdevs_discovered": 1, 00:08:35.323 "num_base_bdevs_operational": 3, 00:08:35.323 "base_bdevs_list": [ 00:08:35.323 { 00:08:35.323 "name": "BaseBdev1", 00:08:35.323 "uuid": "2c4aeea7-6baa-468b-810e-efcbdd680fad", 00:08:35.324 "is_configured": true, 00:08:35.324 "data_offset": 0, 00:08:35.324 "data_size": 65536 00:08:35.324 }, 00:08:35.324 { 00:08:35.324 "name": null, 00:08:35.324 "uuid": "16ec5b1c-2080-49a3-a1db-9a9312b2d79b", 00:08:35.324 "is_configured": false, 00:08:35.324 "data_offset": 0, 00:08:35.324 "data_size": 65536 00:08:35.324 }, 00:08:35.324 { 00:08:35.324 "name": null, 00:08:35.324 "uuid": "b0bc5d0d-121d-461e-abf2-6dda94bc53e1", 00:08:35.324 "is_configured": false, 00:08:35.324 "data_offset": 0, 00:08:35.324 "data_size": 65536 00:08:35.324 } 
00:08:35.324 ] 00:08:35.324 }' 00:08:35.324 04:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:35.324 04:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.582 04:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.582 04:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:35.582 04:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.582 04:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.582 04:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.582 04:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:35.582 04:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:35.582 04:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.582 04:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.582 [2024-11-21 04:06:35.508404] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:35.582 04:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.582 04:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:35.582 04:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:35.582 04:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:35.582 04:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 
00:08:35.582 04:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:35.582 04:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:35.582 04:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:35.582 04:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.582 04:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:35.582 04:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.582 04:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.582 04:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.582 04:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.582 04:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:35.582 04:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.841 04:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:35.841 "name": "Existed_Raid", 00:08:35.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.841 "strip_size_kb": 64, 00:08:35.841 "state": "configuring", 00:08:35.841 "raid_level": "raid0", 00:08:35.841 "superblock": false, 00:08:35.841 "num_base_bdevs": 3, 00:08:35.841 "num_base_bdevs_discovered": 2, 00:08:35.841 "num_base_bdevs_operational": 3, 00:08:35.841 "base_bdevs_list": [ 00:08:35.841 { 00:08:35.841 "name": "BaseBdev1", 00:08:35.841 "uuid": "2c4aeea7-6baa-468b-810e-efcbdd680fad", 00:08:35.841 "is_configured": true, 00:08:35.841 "data_offset": 0, 00:08:35.841 "data_size": 65536 00:08:35.841 }, 00:08:35.841 { 00:08:35.841 "name": 
null, 00:08:35.841 "uuid": "16ec5b1c-2080-49a3-a1db-9a9312b2d79b", 00:08:35.841 "is_configured": false, 00:08:35.841 "data_offset": 0, 00:08:35.841 "data_size": 65536 00:08:35.841 }, 00:08:35.841 { 00:08:35.841 "name": "BaseBdev3", 00:08:35.841 "uuid": "b0bc5d0d-121d-461e-abf2-6dda94bc53e1", 00:08:35.841 "is_configured": true, 00:08:35.841 "data_offset": 0, 00:08:35.841 "data_size": 65536 00:08:35.841 } 00:08:35.841 ] 00:08:35.841 }' 00:08:35.841 04:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:35.841 04:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.101 04:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:36.101 04:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.101 04:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.101 04:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.101 04:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.101 04:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:36.101 04:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:36.101 04:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.101 04:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.101 [2024-11-21 04:06:36.003648] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:36.101 04:06:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.101 04:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:08:36.101 04:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:36.101 04:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:36.101 04:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:36.101 04:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:36.101 04:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:36.101 04:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:36.101 04:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:36.101 04:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:36.101 04:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:36.101 04:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.101 04:06:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.101 04:06:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.101 04:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:36.101 04:06:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.361 04:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:36.361 "name": "Existed_Raid", 00:08:36.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:36.361 "strip_size_kb": 64, 00:08:36.361 "state": "configuring", 00:08:36.361 "raid_level": "raid0", 00:08:36.361 "superblock": false, 00:08:36.361 "num_base_bdevs": 3, 00:08:36.361 
"num_base_bdevs_discovered": 1, 00:08:36.361 "num_base_bdevs_operational": 3, 00:08:36.361 "base_bdevs_list": [ 00:08:36.361 { 00:08:36.361 "name": null, 00:08:36.361 "uuid": "2c4aeea7-6baa-468b-810e-efcbdd680fad", 00:08:36.361 "is_configured": false, 00:08:36.361 "data_offset": 0, 00:08:36.361 "data_size": 65536 00:08:36.361 }, 00:08:36.361 { 00:08:36.361 "name": null, 00:08:36.361 "uuid": "16ec5b1c-2080-49a3-a1db-9a9312b2d79b", 00:08:36.361 "is_configured": false, 00:08:36.361 "data_offset": 0, 00:08:36.361 "data_size": 65536 00:08:36.361 }, 00:08:36.361 { 00:08:36.361 "name": "BaseBdev3", 00:08:36.361 "uuid": "b0bc5d0d-121d-461e-abf2-6dda94bc53e1", 00:08:36.361 "is_configured": true, 00:08:36.361 "data_offset": 0, 00:08:36.361 "data_size": 65536 00:08:36.361 } 00:08:36.361 ] 00:08:36.361 }' 00:08:36.361 04:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:36.361 04:06:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.619 04:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.619 04:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:36.619 04:06:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.619 04:06:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.620 04:06:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.620 04:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:36.620 04:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:36.620 04:06:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.620 04:06:36 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:08:36.620 [2024-11-21 04:06:36.491439] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:36.620 04:06:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.620 04:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:36.620 04:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:36.620 04:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:36.620 04:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:36.620 04:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:36.620 04:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:36.620 04:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:36.620 04:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:36.620 04:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:36.620 04:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:36.620 04:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.620 04:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:36.620 04:06:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.620 04:06:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.620 04:06:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:08:36.620 04:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:36.620 "name": "Existed_Raid", 00:08:36.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:36.620 "strip_size_kb": 64, 00:08:36.620 "state": "configuring", 00:08:36.620 "raid_level": "raid0", 00:08:36.620 "superblock": false, 00:08:36.620 "num_base_bdevs": 3, 00:08:36.620 "num_base_bdevs_discovered": 2, 00:08:36.620 "num_base_bdevs_operational": 3, 00:08:36.620 "base_bdevs_list": [ 00:08:36.620 { 00:08:36.620 "name": null, 00:08:36.620 "uuid": "2c4aeea7-6baa-468b-810e-efcbdd680fad", 00:08:36.620 "is_configured": false, 00:08:36.620 "data_offset": 0, 00:08:36.620 "data_size": 65536 00:08:36.620 }, 00:08:36.620 { 00:08:36.620 "name": "BaseBdev2", 00:08:36.620 "uuid": "16ec5b1c-2080-49a3-a1db-9a9312b2d79b", 00:08:36.620 "is_configured": true, 00:08:36.620 "data_offset": 0, 00:08:36.620 "data_size": 65536 00:08:36.620 }, 00:08:36.620 { 00:08:36.620 "name": "BaseBdev3", 00:08:36.620 "uuid": "b0bc5d0d-121d-461e-abf2-6dda94bc53e1", 00:08:36.620 "is_configured": true, 00:08:36.620 "data_offset": 0, 00:08:36.620 "data_size": 65536 00:08:36.620 } 00:08:36.620 ] 00:08:36.620 }' 00:08:36.620 04:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:36.620 04:06:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.199 04:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.199 04:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:37.199 04:06:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.199 04:06:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.199 04:06:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.199 
04:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:37.199 04:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:37.199 04:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.199 04:06:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.199 04:06:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.199 04:06:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.199 04:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 2c4aeea7-6baa-468b-810e-efcbdd680fad 00:08:37.199 04:06:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.199 04:06:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.199 [2024-11-21 04:06:37.071694] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:37.199 [2024-11-21 04:06:37.071799] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:08:37.199 [2024-11-21 04:06:37.071887] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:37.199 [2024-11-21 04:06:37.072250] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:08:37.199 [2024-11-21 04:06:37.072460] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:08:37.199 [2024-11-21 04:06:37.072505] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:08:37.199 [2024-11-21 04:06:37.072830] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:37.199 NewBaseBdev 00:08:37.199 04:06:37 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.199 04:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:37.199 04:06:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:37.199 04:06:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:37.199 04:06:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:37.199 04:06:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:37.199 04:06:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:37.199 04:06:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:37.199 04:06:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.199 04:06:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.199 04:06:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.199 04:06:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:37.199 04:06:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.199 04:06:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.199 [ 00:08:37.199 { 00:08:37.199 "name": "NewBaseBdev", 00:08:37.199 "aliases": [ 00:08:37.199 "2c4aeea7-6baa-468b-810e-efcbdd680fad" 00:08:37.199 ], 00:08:37.199 "product_name": "Malloc disk", 00:08:37.199 "block_size": 512, 00:08:37.199 "num_blocks": 65536, 00:08:37.199 "uuid": "2c4aeea7-6baa-468b-810e-efcbdd680fad", 00:08:37.199 "assigned_rate_limits": { 00:08:37.199 "rw_ios_per_sec": 0, 00:08:37.199 "rw_mbytes_per_sec": 0, 
00:08:37.199 "r_mbytes_per_sec": 0, 00:08:37.199 "w_mbytes_per_sec": 0 00:08:37.199 }, 00:08:37.199 "claimed": true, 00:08:37.199 "claim_type": "exclusive_write", 00:08:37.199 "zoned": false, 00:08:37.199 "supported_io_types": { 00:08:37.199 "read": true, 00:08:37.199 "write": true, 00:08:37.200 "unmap": true, 00:08:37.200 "flush": true, 00:08:37.200 "reset": true, 00:08:37.200 "nvme_admin": false, 00:08:37.200 "nvme_io": false, 00:08:37.200 "nvme_io_md": false, 00:08:37.200 "write_zeroes": true, 00:08:37.200 "zcopy": true, 00:08:37.200 "get_zone_info": false, 00:08:37.200 "zone_management": false, 00:08:37.200 "zone_append": false, 00:08:37.200 "compare": false, 00:08:37.200 "compare_and_write": false, 00:08:37.200 "abort": true, 00:08:37.200 "seek_hole": false, 00:08:37.200 "seek_data": false, 00:08:37.200 "copy": true, 00:08:37.200 "nvme_iov_md": false 00:08:37.200 }, 00:08:37.200 "memory_domains": [ 00:08:37.200 { 00:08:37.200 "dma_device_id": "system", 00:08:37.200 "dma_device_type": 1 00:08:37.200 }, 00:08:37.200 { 00:08:37.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:37.200 "dma_device_type": 2 00:08:37.200 } 00:08:37.200 ], 00:08:37.200 "driver_specific": {} 00:08:37.200 } 00:08:37.200 ] 00:08:37.200 04:06:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.200 04:06:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:37.200 04:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:37.200 04:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:37.200 04:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:37.200 04:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:37.200 04:06:37 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:37.200 04:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:37.200 04:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.200 04:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.200 04:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.200 04:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.200 04:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.200 04:06:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.200 04:06:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.200 04:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:37.200 04:06:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.200 04:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:37.200 "name": "Existed_Raid", 00:08:37.200 "uuid": "1d60cef2-0b30-4b1a-adb4-540ec9944c4a", 00:08:37.200 "strip_size_kb": 64, 00:08:37.200 "state": "online", 00:08:37.200 "raid_level": "raid0", 00:08:37.200 "superblock": false, 00:08:37.200 "num_base_bdevs": 3, 00:08:37.200 "num_base_bdevs_discovered": 3, 00:08:37.200 "num_base_bdevs_operational": 3, 00:08:37.200 "base_bdevs_list": [ 00:08:37.200 { 00:08:37.200 "name": "NewBaseBdev", 00:08:37.200 "uuid": "2c4aeea7-6baa-468b-810e-efcbdd680fad", 00:08:37.200 "is_configured": true, 00:08:37.200 "data_offset": 0, 00:08:37.200 "data_size": 65536 00:08:37.200 }, 00:08:37.200 { 00:08:37.200 "name": "BaseBdev2", 00:08:37.200 "uuid": 
"16ec5b1c-2080-49a3-a1db-9a9312b2d79b", 00:08:37.200 "is_configured": true, 00:08:37.200 "data_offset": 0, 00:08:37.200 "data_size": 65536 00:08:37.200 }, 00:08:37.200 { 00:08:37.200 "name": "BaseBdev3", 00:08:37.200 "uuid": "b0bc5d0d-121d-461e-abf2-6dda94bc53e1", 00:08:37.200 "is_configured": true, 00:08:37.200 "data_offset": 0, 00:08:37.200 "data_size": 65536 00:08:37.200 } 00:08:37.200 ] 00:08:37.200 }' 00:08:37.200 04:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:37.200 04:06:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.769 04:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:37.769 04:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:37.769 04:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:37.769 04:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:37.769 04:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:37.769 04:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:37.769 04:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:37.769 04:06:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.769 04:06:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.769 04:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:37.769 [2024-11-21 04:06:37.547347] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:37.769 04:06:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.769 04:06:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:37.769 "name": "Existed_Raid", 00:08:37.769 "aliases": [ 00:08:37.769 "1d60cef2-0b30-4b1a-adb4-540ec9944c4a" 00:08:37.769 ], 00:08:37.769 "product_name": "Raid Volume", 00:08:37.769 "block_size": 512, 00:08:37.769 "num_blocks": 196608, 00:08:37.769 "uuid": "1d60cef2-0b30-4b1a-adb4-540ec9944c4a", 00:08:37.769 "assigned_rate_limits": { 00:08:37.769 "rw_ios_per_sec": 0, 00:08:37.769 "rw_mbytes_per_sec": 0, 00:08:37.769 "r_mbytes_per_sec": 0, 00:08:37.769 "w_mbytes_per_sec": 0 00:08:37.769 }, 00:08:37.769 "claimed": false, 00:08:37.769 "zoned": false, 00:08:37.769 "supported_io_types": { 00:08:37.769 "read": true, 00:08:37.769 "write": true, 00:08:37.769 "unmap": true, 00:08:37.769 "flush": true, 00:08:37.769 "reset": true, 00:08:37.769 "nvme_admin": false, 00:08:37.769 "nvme_io": false, 00:08:37.769 "nvme_io_md": false, 00:08:37.769 "write_zeroes": true, 00:08:37.769 "zcopy": false, 00:08:37.769 "get_zone_info": false, 00:08:37.769 "zone_management": false, 00:08:37.769 "zone_append": false, 00:08:37.769 "compare": false, 00:08:37.769 "compare_and_write": false, 00:08:37.769 "abort": false, 00:08:37.769 "seek_hole": false, 00:08:37.769 "seek_data": false, 00:08:37.769 "copy": false, 00:08:37.769 "nvme_iov_md": false 00:08:37.769 }, 00:08:37.769 "memory_domains": [ 00:08:37.769 { 00:08:37.769 "dma_device_id": "system", 00:08:37.769 "dma_device_type": 1 00:08:37.769 }, 00:08:37.769 { 00:08:37.769 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:37.769 "dma_device_type": 2 00:08:37.769 }, 00:08:37.769 { 00:08:37.769 "dma_device_id": "system", 00:08:37.769 "dma_device_type": 1 00:08:37.769 }, 00:08:37.769 { 00:08:37.769 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:37.769 "dma_device_type": 2 00:08:37.769 }, 00:08:37.769 { 00:08:37.769 "dma_device_id": "system", 00:08:37.769 "dma_device_type": 1 00:08:37.769 }, 00:08:37.769 { 00:08:37.769 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:08:37.769 "dma_device_type": 2 00:08:37.769 } 00:08:37.769 ], 00:08:37.769 "driver_specific": { 00:08:37.769 "raid": { 00:08:37.769 "uuid": "1d60cef2-0b30-4b1a-adb4-540ec9944c4a", 00:08:37.769 "strip_size_kb": 64, 00:08:37.769 "state": "online", 00:08:37.769 "raid_level": "raid0", 00:08:37.769 "superblock": false, 00:08:37.769 "num_base_bdevs": 3, 00:08:37.769 "num_base_bdevs_discovered": 3, 00:08:37.769 "num_base_bdevs_operational": 3, 00:08:37.769 "base_bdevs_list": [ 00:08:37.769 { 00:08:37.769 "name": "NewBaseBdev", 00:08:37.769 "uuid": "2c4aeea7-6baa-468b-810e-efcbdd680fad", 00:08:37.769 "is_configured": true, 00:08:37.769 "data_offset": 0, 00:08:37.769 "data_size": 65536 00:08:37.769 }, 00:08:37.769 { 00:08:37.769 "name": "BaseBdev2", 00:08:37.769 "uuid": "16ec5b1c-2080-49a3-a1db-9a9312b2d79b", 00:08:37.769 "is_configured": true, 00:08:37.769 "data_offset": 0, 00:08:37.769 "data_size": 65536 00:08:37.769 }, 00:08:37.769 { 00:08:37.769 "name": "BaseBdev3", 00:08:37.769 "uuid": "b0bc5d0d-121d-461e-abf2-6dda94bc53e1", 00:08:37.770 "is_configured": true, 00:08:37.770 "data_offset": 0, 00:08:37.770 "data_size": 65536 00:08:37.770 } 00:08:37.770 ] 00:08:37.770 } 00:08:37.770 } 00:08:37.770 }' 00:08:37.770 04:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:37.770 04:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:37.770 BaseBdev2 00:08:37.770 BaseBdev3' 00:08:37.770 04:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:37.770 04:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:37.770 04:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:37.770 04:06:37 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:37.770 04:06:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.770 04:06:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.770 04:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:37.770 04:06:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.028 04:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:38.028 04:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:38.028 04:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:38.029 04:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:38.029 04:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:38.029 04:06:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.029 04:06:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.029 04:06:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.029 04:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:38.029 04:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:38.029 04:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:38.029 04:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
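The `verify_raid_bdev_properties` loop above joins each bdev's `block_size`, `md_size`, `md_interleave`, and `dif_type` into a single space-joined string (`cmp_raid_bdev` / `cmp_base_bdev`) and compares the raid bdev's string against each base bdev's — hence the escaped-space pattern match `[[ 512 == \5\1\2\ \ \ ]]`. A minimal bash sketch of that comparison, with hypothetical stand-in values (here only `block_size` is set, the other three fields are empty):

```shell
#!/usr/bin/env bash
# Sketch of the property comparison pattern seen in the log: four bdev
# fields are joined into one string per bdev, and the raid bdev's string
# must match every base bdev's string. Values below are stand-ins.
cmp_raid_bdev='512   '   # block_size + empty md_size, md_interleave, dif_type
cmp_base_bdev='512   '   # a base bdev with identical properties
if [[ "$cmp_raid_bdev" == "$cmp_base_bdev" ]]; then
    result="properties match"
else
    result="properties differ"
fi
echo "$result"
```

Joining the fields first means one string comparison per base bdev catches a mismatch in any of the four properties at once.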
00:08:38.029 04:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:38.029 04:06:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.029 04:06:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.029 04:06:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.029 04:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:38.029 04:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:38.029 04:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:38.029 04:06:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.029 04:06:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.029 [2024-11-21 04:06:37.846458] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:38.029 [2024-11-21 04:06:37.846539] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:38.029 [2024-11-21 04:06:37.846675] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:38.029 [2024-11-21 04:06:37.846786] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:38.029 [2024-11-21 04:06:37.846846] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:08:38.029 04:06:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.029 04:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 75029 00:08:38.029 04:06:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 75029 
']' 00:08:38.029 04:06:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 75029 00:08:38.029 04:06:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:38.029 04:06:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:38.029 04:06:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75029 00:08:38.029 04:06:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:38.029 04:06:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:38.029 04:06:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75029' 00:08:38.029 killing process with pid 75029 00:08:38.029 04:06:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 75029 00:08:38.029 [2024-11-21 04:06:37.898204] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:38.029 04:06:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 75029 00:08:38.029 [2024-11-21 04:06:37.958745] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:38.597 04:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:38.597 00:08:38.597 real 0m9.304s 00:08:38.597 user 0m15.628s 00:08:38.597 sys 0m1.996s 00:08:38.597 04:06:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:38.597 04:06:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.597 ************************************ 00:08:38.597 END TEST raid_state_function_test 00:08:38.597 ************************************ 00:08:38.597 04:06:38 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:08:38.597 
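The `killprocess 75029` call above follows a fixed shape: confirm the PID variable is non-empty, probe the process with `kill -0`, read its command name via `ps -o comm=` (the log sees `reactor_0`), then kill and reap it. A hedged sketch of that flow, using a short-lived `sleep` as a stand-in for the test application:

```shell
#!/usr/bin/env bash
# Sketch of the killprocess helper's flow: verify the PID, inspect the
# process name, then kill and wait. `sleep` stands in for the SPDK app.
sleep 5 &
pid=$!
[[ -n "$pid" ]] || exit 1          # '[' -z "$pid" ']' guard in the log
kill -0 "$pid"                     # process must still be alive
name=$(ps -o comm= "$pid")         # e.g. reactor_0 for the real app
kill "$pid"
wait "$pid" 2>/dev/null || true    # reap; exit status reflects SIGTERM
echo "killed $name (pid $pid)"
```

Checking the name before killing guards against the PID having been recycled by an unrelated process between the test's start and teardown.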
04:06:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:38.597 04:06:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:38.597 04:06:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:38.597 ************************************ 00:08:38.597 START TEST raid_state_function_test_sb 00:08:38.597 ************************************ 00:08:38.597 04:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 00:08:38.597 04:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:38.597 04:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:38.597 04:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:38.597 04:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:38.597 04:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:38.597 04:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:38.597 04:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:38.597 04:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:38.597 04:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:38.597 04:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:38.597 04:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:38.597 04:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:38.597 04:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:38.597 04:06:38 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:38.597 04:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:38.597 04:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:38.597 04:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:38.597 04:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:38.597 04:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:38.597 04:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:38.597 04:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:38.597 04:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:38.597 04:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:38.597 04:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:38.597 04:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:38.597 04:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:38.597 04:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=75639 00:08:38.597 04:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:38.597 04:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 75639' 00:08:38.597 Process raid pid: 75639 00:08:38.597 04:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 75639 00:08:38.597 04:06:38 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 75639 ']' 00:08:38.597 04:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.597 04:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:38.597 04:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:38.598 04:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:38.598 04:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.598 [2024-11-21 04:06:38.448921] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:08:38.598 [2024-11-21 04:06:38.449179] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:38.856 [2024-11-21 04:06:38.604663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.856 [2024-11-21 04:06:38.648093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.857 [2024-11-21 04:06:38.725909] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:38.857 [2024-11-21 04:06:38.726028] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:39.430 04:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:39.430 04:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:39.430 04:06:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:39.430 04:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.430 04:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.430 [2024-11-21 04:06:39.298760] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:39.430 [2024-11-21 04:06:39.298825] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:39.430 [2024-11-21 04:06:39.298843] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:39.430 [2024-11-21 04:06:39.298856] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:39.430 [2024-11-21 04:06:39.298863] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:39.430 [2024-11-21 04:06:39.298877] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:39.430 04:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.430 04:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:39.430 04:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:39.430 04:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:39.430 04:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:39.430 04:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:39.430 04:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:39.430 
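The `verify_raid_bdev_state` helper that follows fetches all raid bdevs and picks out one entry by name with `jq -r '.[] | select(.name == "Existed_Raid")'`, then asserts fields like `state` against the expected values. A minimal sketch of that selection, with a trimmed, hypothetical stand-in for `bdev_raid_get_bdevs` output:

```shell
#!/usr/bin/env bash
# Sketch of the state-verification jq filter from the log: select one
# raid bdev from the array by name and read a field from it. The JSON
# below is a hand-written stand-in for real bdev_raid_get_bdevs output.
json='[{"name":"Existed_Raid","state":"configuring","num_base_bdevs_discovered":0}]'
state=$(jq -r '.[] | select(.name == "Existed_Raid").state' <<<"$json")
echo "state=$state"
```

The same `select(...)` idiom scales to any of the fields the test checks (`raid_level`, `strip_size_kb`, `num_base_bdevs_operational`, and so on).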
04:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:39.430 04:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.430 04:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:39.430 04:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:39.430 04:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:39.430 04:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.430 04:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.430 04:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.430 04:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.430 04:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:39.430 "name": "Existed_Raid", 00:08:39.430 "uuid": "80b3c3d6-87bd-47b0-8469-f52c961c67fb", 00:08:39.430 "strip_size_kb": 64, 00:08:39.430 "state": "configuring", 00:08:39.430 "raid_level": "raid0", 00:08:39.430 "superblock": true, 00:08:39.430 "num_base_bdevs": 3, 00:08:39.430 "num_base_bdevs_discovered": 0, 00:08:39.430 "num_base_bdevs_operational": 3, 00:08:39.430 "base_bdevs_list": [ 00:08:39.430 { 00:08:39.430 "name": "BaseBdev1", 00:08:39.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:39.430 "is_configured": false, 00:08:39.430 "data_offset": 0, 00:08:39.430 "data_size": 0 00:08:39.430 }, 00:08:39.430 { 00:08:39.430 "name": "BaseBdev2", 00:08:39.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:39.430 "is_configured": false, 00:08:39.430 "data_offset": 0, 00:08:39.430 "data_size": 0 00:08:39.430 }, 00:08:39.430 { 00:08:39.430 
"name": "BaseBdev3", 00:08:39.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:39.430 "is_configured": false, 00:08:39.430 "data_offset": 0, 00:08:39.430 "data_size": 0 00:08:39.430 } 00:08:39.430 ] 00:08:39.430 }' 00:08:39.430 04:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:39.430 04:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.001 04:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:40.001 04:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.001 04:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.001 [2024-11-21 04:06:39.709907] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:40.001 [2024-11-21 04:06:39.710010] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:08:40.001 04:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.001 04:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:40.001 04:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.001 04:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.001 [2024-11-21 04:06:39.721906] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:40.001 [2024-11-21 04:06:39.722006] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:40.001 [2024-11-21 04:06:39.722059] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:40.001 [2024-11-21 
04:06:39.722109] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:40.001 [2024-11-21 04:06:39.722143] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:40.001 [2024-11-21 04:06:39.722207] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:40.001 04:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.001 04:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:40.001 04:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.001 04:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.001 [2024-11-21 04:06:39.749143] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:40.001 BaseBdev1 00:08:40.001 04:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.001 04:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:40.001 04:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:40.001 04:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:40.001 04:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:40.001 04:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:40.001 04:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:40.001 04:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:40.001 04:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 
-- # xtrace_disable 00:08:40.001 04:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.001 04:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.001 04:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:40.002 04:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.002 04:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.002 [ 00:08:40.002 { 00:08:40.002 "name": "BaseBdev1", 00:08:40.002 "aliases": [ 00:08:40.002 "cf56c799-33d5-4a13-9b87-d2a74ea20c68" 00:08:40.002 ], 00:08:40.002 "product_name": "Malloc disk", 00:08:40.002 "block_size": 512, 00:08:40.002 "num_blocks": 65536, 00:08:40.002 "uuid": "cf56c799-33d5-4a13-9b87-d2a74ea20c68", 00:08:40.002 "assigned_rate_limits": { 00:08:40.002 "rw_ios_per_sec": 0, 00:08:40.002 "rw_mbytes_per_sec": 0, 00:08:40.002 "r_mbytes_per_sec": 0, 00:08:40.002 "w_mbytes_per_sec": 0 00:08:40.002 }, 00:08:40.002 "claimed": true, 00:08:40.002 "claim_type": "exclusive_write", 00:08:40.002 "zoned": false, 00:08:40.002 "supported_io_types": { 00:08:40.002 "read": true, 00:08:40.002 "write": true, 00:08:40.002 "unmap": true, 00:08:40.002 "flush": true, 00:08:40.002 "reset": true, 00:08:40.002 "nvme_admin": false, 00:08:40.002 "nvme_io": false, 00:08:40.002 "nvme_io_md": false, 00:08:40.002 "write_zeroes": true, 00:08:40.002 "zcopy": true, 00:08:40.002 "get_zone_info": false, 00:08:40.002 "zone_management": false, 00:08:40.002 "zone_append": false, 00:08:40.002 "compare": false, 00:08:40.002 "compare_and_write": false, 00:08:40.002 "abort": true, 00:08:40.002 "seek_hole": false, 00:08:40.002 "seek_data": false, 00:08:40.002 "copy": true, 00:08:40.002 "nvme_iov_md": false 00:08:40.002 }, 00:08:40.002 "memory_domains": [ 00:08:40.002 { 00:08:40.002 "dma_device_id": 
"system", 00:08:40.002 "dma_device_type": 1 00:08:40.002 }, 00:08:40.002 { 00:08:40.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.002 "dma_device_type": 2 00:08:40.002 } 00:08:40.002 ], 00:08:40.002 "driver_specific": {} 00:08:40.002 } 00:08:40.002 ] 00:08:40.002 04:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.002 04:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:40.002 04:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:40.002 04:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:40.002 04:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:40.002 04:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:40.002 04:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:40.002 04:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:40.002 04:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.002 04:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.002 04:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.002 04:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.002 04:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.002 04:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:40.002 04:06:39 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.002 04:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.002 04:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.002 04:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.002 "name": "Existed_Raid", 00:08:40.002 "uuid": "e096525c-88dd-4fe8-9781-94b533e27505", 00:08:40.002 "strip_size_kb": 64, 00:08:40.002 "state": "configuring", 00:08:40.002 "raid_level": "raid0", 00:08:40.002 "superblock": true, 00:08:40.002 "num_base_bdevs": 3, 00:08:40.002 "num_base_bdevs_discovered": 1, 00:08:40.002 "num_base_bdevs_operational": 3, 00:08:40.002 "base_bdevs_list": [ 00:08:40.002 { 00:08:40.002 "name": "BaseBdev1", 00:08:40.002 "uuid": "cf56c799-33d5-4a13-9b87-d2a74ea20c68", 00:08:40.002 "is_configured": true, 00:08:40.002 "data_offset": 2048, 00:08:40.002 "data_size": 63488 00:08:40.002 }, 00:08:40.002 { 00:08:40.002 "name": "BaseBdev2", 00:08:40.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.002 "is_configured": false, 00:08:40.002 "data_offset": 0, 00:08:40.002 "data_size": 0 00:08:40.002 }, 00:08:40.002 { 00:08:40.002 "name": "BaseBdev3", 00:08:40.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.002 "is_configured": false, 00:08:40.002 "data_offset": 0, 00:08:40.002 "data_size": 0 00:08:40.002 } 00:08:40.002 ] 00:08:40.002 }' 00:08:40.002 04:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.002 04:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.262 04:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:40.262 04:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.262 04:06:40 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:40.262 [2024-11-21 04:06:40.152579] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:40.262 [2024-11-21 04:06:40.152648] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:08:40.262 04:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.262 04:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:40.262 04:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.262 04:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.262 [2024-11-21 04:06:40.160622] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:40.262 [2024-11-21 04:06:40.162888] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:40.262 [2024-11-21 04:06:40.162935] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:40.262 [2024-11-21 04:06:40.162945] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:40.262 [2024-11-21 04:06:40.162955] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:40.262 04:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.262 04:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:40.262 04:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:40.262 04:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:40.262 04:06:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:40.262 04:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:40.262 04:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:40.262 04:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:40.262 04:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:40.262 04:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.262 04:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.262 04:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.262 04:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.262 04:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:40.262 04:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.262 04:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.262 04:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.262 04:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.262 04:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.262 "name": "Existed_Raid", 00:08:40.262 "uuid": "689226b9-38fe-4a6a-965e-73f3dcc05041", 00:08:40.262 "strip_size_kb": 64, 00:08:40.262 "state": "configuring", 00:08:40.262 "raid_level": "raid0", 00:08:40.262 "superblock": true, 00:08:40.262 "num_base_bdevs": 3, 00:08:40.262 
"num_base_bdevs_discovered": 1, 00:08:40.262 "num_base_bdevs_operational": 3, 00:08:40.262 "base_bdevs_list": [ 00:08:40.262 { 00:08:40.262 "name": "BaseBdev1", 00:08:40.262 "uuid": "cf56c799-33d5-4a13-9b87-d2a74ea20c68", 00:08:40.262 "is_configured": true, 00:08:40.262 "data_offset": 2048, 00:08:40.262 "data_size": 63488 00:08:40.262 }, 00:08:40.262 { 00:08:40.262 "name": "BaseBdev2", 00:08:40.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.262 "is_configured": false, 00:08:40.262 "data_offset": 0, 00:08:40.262 "data_size": 0 00:08:40.262 }, 00:08:40.262 { 00:08:40.262 "name": "BaseBdev3", 00:08:40.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.262 "is_configured": false, 00:08:40.262 "data_offset": 0, 00:08:40.262 "data_size": 0 00:08:40.262 } 00:08:40.262 ] 00:08:40.262 }' 00:08:40.262 04:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.262 04:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.831 04:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:40.831 04:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.831 04:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.831 [2024-11-21 04:06:40.588606] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:40.831 BaseBdev2 00:08:40.831 04:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.831 04:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:40.831 04:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:40.831 04:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:08:40.831 04:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:40.831 04:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:40.831 04:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:40.831 04:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:40.831 04:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.831 04:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.831 04:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.831 04:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:40.831 04:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.831 04:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.831 [ 00:08:40.831 { 00:08:40.831 "name": "BaseBdev2", 00:08:40.831 "aliases": [ 00:08:40.831 "cf0e7582-9f0e-48e8-8293-b73286545b5d" 00:08:40.831 ], 00:08:40.831 "product_name": "Malloc disk", 00:08:40.831 "block_size": 512, 00:08:40.831 "num_blocks": 65536, 00:08:40.831 "uuid": "cf0e7582-9f0e-48e8-8293-b73286545b5d", 00:08:40.831 "assigned_rate_limits": { 00:08:40.831 "rw_ios_per_sec": 0, 00:08:40.831 "rw_mbytes_per_sec": 0, 00:08:40.831 "r_mbytes_per_sec": 0, 00:08:40.831 "w_mbytes_per_sec": 0 00:08:40.831 }, 00:08:40.831 "claimed": true, 00:08:40.831 "claim_type": "exclusive_write", 00:08:40.831 "zoned": false, 00:08:40.831 "supported_io_types": { 00:08:40.831 "read": true, 00:08:40.831 "write": true, 00:08:40.831 "unmap": true, 00:08:40.831 "flush": true, 00:08:40.831 "reset": true, 00:08:40.831 "nvme_admin": false, 
00:08:40.831 "nvme_io": false, 00:08:40.831 "nvme_io_md": false, 00:08:40.831 "write_zeroes": true, 00:08:40.831 "zcopy": true, 00:08:40.831 "get_zone_info": false, 00:08:40.831 "zone_management": false, 00:08:40.831 "zone_append": false, 00:08:40.831 "compare": false, 00:08:40.831 "compare_and_write": false, 00:08:40.831 "abort": true, 00:08:40.831 "seek_hole": false, 00:08:40.831 "seek_data": false, 00:08:40.831 "copy": true, 00:08:40.831 "nvme_iov_md": false 00:08:40.831 }, 00:08:40.831 "memory_domains": [ 00:08:40.831 { 00:08:40.831 "dma_device_id": "system", 00:08:40.831 "dma_device_type": 1 00:08:40.831 }, 00:08:40.831 { 00:08:40.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.831 "dma_device_type": 2 00:08:40.831 } 00:08:40.831 ], 00:08:40.831 "driver_specific": {} 00:08:40.831 } 00:08:40.831 ] 00:08:40.831 04:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.831 04:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:40.831 04:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:40.831 04:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:40.831 04:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:40.831 04:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:40.831 04:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:40.831 04:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:40.831 04:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:40.831 04:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:08:40.831 04:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.831 04:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.831 04:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.831 04:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.831 04:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.831 04:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:40.831 04:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.831 04:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.831 04:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.831 04:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.831 "name": "Existed_Raid", 00:08:40.831 "uuid": "689226b9-38fe-4a6a-965e-73f3dcc05041", 00:08:40.831 "strip_size_kb": 64, 00:08:40.831 "state": "configuring", 00:08:40.831 "raid_level": "raid0", 00:08:40.831 "superblock": true, 00:08:40.831 "num_base_bdevs": 3, 00:08:40.831 "num_base_bdevs_discovered": 2, 00:08:40.831 "num_base_bdevs_operational": 3, 00:08:40.831 "base_bdevs_list": [ 00:08:40.831 { 00:08:40.831 "name": "BaseBdev1", 00:08:40.831 "uuid": "cf56c799-33d5-4a13-9b87-d2a74ea20c68", 00:08:40.831 "is_configured": true, 00:08:40.831 "data_offset": 2048, 00:08:40.831 "data_size": 63488 00:08:40.831 }, 00:08:40.831 { 00:08:40.831 "name": "BaseBdev2", 00:08:40.831 "uuid": "cf0e7582-9f0e-48e8-8293-b73286545b5d", 00:08:40.831 "is_configured": true, 00:08:40.831 "data_offset": 2048, 00:08:40.831 "data_size": 63488 00:08:40.831 }, 
00:08:40.831 { 00:08:40.831 "name": "BaseBdev3", 00:08:40.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.831 "is_configured": false, 00:08:40.831 "data_offset": 0, 00:08:40.831 "data_size": 0 00:08:40.831 } 00:08:40.831 ] 00:08:40.831 }' 00:08:40.831 04:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.831 04:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.402 04:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:41.402 04:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.402 04:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.402 [2024-11-21 04:06:41.109720] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:41.402 [2024-11-21 04:06:41.109986] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:08:41.402 [2024-11-21 04:06:41.110019] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:41.402 BaseBdev3 00:08:41.402 [2024-11-21 04:06:41.110486] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:08:41.402 [2024-11-21 04:06:41.110667] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:08:41.402 [2024-11-21 04:06:41.110688] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:08:41.402 [2024-11-21 04:06:41.110862] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:41.402 04:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.402 04:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:41.402 04:06:41 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:41.402 04:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:41.402 04:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:41.402 04:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:41.402 04:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:41.402 04:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:41.402 04:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.402 04:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.402 04:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.402 04:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:41.402 04:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.402 04:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.402 [ 00:08:41.402 { 00:08:41.402 "name": "BaseBdev3", 00:08:41.402 "aliases": [ 00:08:41.402 "33bf3611-af32-40bd-8bd5-d125144e35d9" 00:08:41.402 ], 00:08:41.402 "product_name": "Malloc disk", 00:08:41.402 "block_size": 512, 00:08:41.402 "num_blocks": 65536, 00:08:41.402 "uuid": "33bf3611-af32-40bd-8bd5-d125144e35d9", 00:08:41.402 "assigned_rate_limits": { 00:08:41.402 "rw_ios_per_sec": 0, 00:08:41.402 "rw_mbytes_per_sec": 0, 00:08:41.402 "r_mbytes_per_sec": 0, 00:08:41.402 "w_mbytes_per_sec": 0 00:08:41.402 }, 00:08:41.402 "claimed": true, 00:08:41.402 "claim_type": "exclusive_write", 00:08:41.402 "zoned": false, 
00:08:41.402 "supported_io_types": { 00:08:41.402 "read": true, 00:08:41.402 "write": true, 00:08:41.402 "unmap": true, 00:08:41.402 "flush": true, 00:08:41.402 "reset": true, 00:08:41.402 "nvme_admin": false, 00:08:41.402 "nvme_io": false, 00:08:41.402 "nvme_io_md": false, 00:08:41.402 "write_zeroes": true, 00:08:41.402 "zcopy": true, 00:08:41.402 "get_zone_info": false, 00:08:41.402 "zone_management": false, 00:08:41.402 "zone_append": false, 00:08:41.402 "compare": false, 00:08:41.402 "compare_and_write": false, 00:08:41.402 "abort": true, 00:08:41.402 "seek_hole": false, 00:08:41.402 "seek_data": false, 00:08:41.402 "copy": true, 00:08:41.402 "nvme_iov_md": false 00:08:41.402 }, 00:08:41.402 "memory_domains": [ 00:08:41.402 { 00:08:41.402 "dma_device_id": "system", 00:08:41.402 "dma_device_type": 1 00:08:41.402 }, 00:08:41.402 { 00:08:41.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.402 "dma_device_type": 2 00:08:41.402 } 00:08:41.403 ], 00:08:41.403 "driver_specific": {} 00:08:41.403 } 00:08:41.403 ] 00:08:41.403 04:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.403 04:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:41.403 04:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:41.403 04:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:41.403 04:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:41.403 04:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:41.403 04:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:41.403 04:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:41.403 04:06:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:41.403 04:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:41.403 04:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:41.403 04:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.403 04:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:41.403 04:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.403 04:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.403 04:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:41.403 04:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.403 04:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.403 04:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.403 04:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:41.403 "name": "Existed_Raid", 00:08:41.403 "uuid": "689226b9-38fe-4a6a-965e-73f3dcc05041", 00:08:41.403 "strip_size_kb": 64, 00:08:41.403 "state": "online", 00:08:41.403 "raid_level": "raid0", 00:08:41.403 "superblock": true, 00:08:41.403 "num_base_bdevs": 3, 00:08:41.403 "num_base_bdevs_discovered": 3, 00:08:41.403 "num_base_bdevs_operational": 3, 00:08:41.403 "base_bdevs_list": [ 00:08:41.403 { 00:08:41.403 "name": "BaseBdev1", 00:08:41.403 "uuid": "cf56c799-33d5-4a13-9b87-d2a74ea20c68", 00:08:41.403 "is_configured": true, 00:08:41.403 "data_offset": 2048, 00:08:41.403 "data_size": 63488 00:08:41.403 }, 00:08:41.403 { 00:08:41.403 
"name": "BaseBdev2", 00:08:41.403 "uuid": "cf0e7582-9f0e-48e8-8293-b73286545b5d", 00:08:41.403 "is_configured": true, 00:08:41.403 "data_offset": 2048, 00:08:41.403 "data_size": 63488 00:08:41.403 }, 00:08:41.403 { 00:08:41.403 "name": "BaseBdev3", 00:08:41.403 "uuid": "33bf3611-af32-40bd-8bd5-d125144e35d9", 00:08:41.403 "is_configured": true, 00:08:41.403 "data_offset": 2048, 00:08:41.403 "data_size": 63488 00:08:41.403 } 00:08:41.403 ] 00:08:41.403 }' 00:08:41.403 04:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:41.403 04:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.662 04:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:41.662 04:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:41.662 04:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:41.662 04:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:41.662 04:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:41.662 04:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:41.662 04:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:41.662 04:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:41.663 04:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.663 04:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.663 [2024-11-21 04:06:41.625288] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:41.923 04:06:41 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.923 04:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:41.923 "name": "Existed_Raid", 00:08:41.923 "aliases": [ 00:08:41.923 "689226b9-38fe-4a6a-965e-73f3dcc05041" 00:08:41.923 ], 00:08:41.923 "product_name": "Raid Volume", 00:08:41.923 "block_size": 512, 00:08:41.923 "num_blocks": 190464, 00:08:41.923 "uuid": "689226b9-38fe-4a6a-965e-73f3dcc05041", 00:08:41.923 "assigned_rate_limits": { 00:08:41.923 "rw_ios_per_sec": 0, 00:08:41.923 "rw_mbytes_per_sec": 0, 00:08:41.923 "r_mbytes_per_sec": 0, 00:08:41.923 "w_mbytes_per_sec": 0 00:08:41.923 }, 00:08:41.923 "claimed": false, 00:08:41.923 "zoned": false, 00:08:41.923 "supported_io_types": { 00:08:41.923 "read": true, 00:08:41.923 "write": true, 00:08:41.923 "unmap": true, 00:08:41.923 "flush": true, 00:08:41.923 "reset": true, 00:08:41.923 "nvme_admin": false, 00:08:41.923 "nvme_io": false, 00:08:41.923 "nvme_io_md": false, 00:08:41.923 "write_zeroes": true, 00:08:41.923 "zcopy": false, 00:08:41.923 "get_zone_info": false, 00:08:41.923 "zone_management": false, 00:08:41.923 "zone_append": false, 00:08:41.923 "compare": false, 00:08:41.923 "compare_and_write": false, 00:08:41.923 "abort": false, 00:08:41.923 "seek_hole": false, 00:08:41.923 "seek_data": false, 00:08:41.923 "copy": false, 00:08:41.923 "nvme_iov_md": false 00:08:41.923 }, 00:08:41.923 "memory_domains": [ 00:08:41.923 { 00:08:41.923 "dma_device_id": "system", 00:08:41.923 "dma_device_type": 1 00:08:41.923 }, 00:08:41.923 { 00:08:41.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.923 "dma_device_type": 2 00:08:41.923 }, 00:08:41.923 { 00:08:41.923 "dma_device_id": "system", 00:08:41.923 "dma_device_type": 1 00:08:41.923 }, 00:08:41.923 { 00:08:41.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.923 "dma_device_type": 2 00:08:41.923 }, 00:08:41.923 { 00:08:41.923 "dma_device_id": "system", 00:08:41.923 "dma_device_type": 1 00:08:41.923 }, 
00:08:41.923 { 00:08:41.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.923 "dma_device_type": 2 00:08:41.923 } 00:08:41.923 ], 00:08:41.923 "driver_specific": { 00:08:41.923 "raid": { 00:08:41.923 "uuid": "689226b9-38fe-4a6a-965e-73f3dcc05041", 00:08:41.923 "strip_size_kb": 64, 00:08:41.923 "state": "online", 00:08:41.923 "raid_level": "raid0", 00:08:41.923 "superblock": true, 00:08:41.923 "num_base_bdevs": 3, 00:08:41.923 "num_base_bdevs_discovered": 3, 00:08:41.923 "num_base_bdevs_operational": 3, 00:08:41.923 "base_bdevs_list": [ 00:08:41.923 { 00:08:41.923 "name": "BaseBdev1", 00:08:41.923 "uuid": "cf56c799-33d5-4a13-9b87-d2a74ea20c68", 00:08:41.923 "is_configured": true, 00:08:41.923 "data_offset": 2048, 00:08:41.923 "data_size": 63488 00:08:41.923 }, 00:08:41.923 { 00:08:41.923 "name": "BaseBdev2", 00:08:41.923 "uuid": "cf0e7582-9f0e-48e8-8293-b73286545b5d", 00:08:41.923 "is_configured": true, 00:08:41.923 "data_offset": 2048, 00:08:41.923 "data_size": 63488 00:08:41.923 }, 00:08:41.923 { 00:08:41.923 "name": "BaseBdev3", 00:08:41.923 "uuid": "33bf3611-af32-40bd-8bd5-d125144e35d9", 00:08:41.923 "is_configured": true, 00:08:41.923 "data_offset": 2048, 00:08:41.923 "data_size": 63488 00:08:41.923 } 00:08:41.923 ] 00:08:41.923 } 00:08:41.923 } 00:08:41.923 }' 00:08:41.923 04:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:41.923 04:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:41.923 BaseBdev2 00:08:41.923 BaseBdev3' 00:08:41.923 04:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:41.923 04:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:41.923 04:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for 
name in $base_bdev_names 00:08:41.923 04:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:41.923 04:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.923 04:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.923 04:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:41.923 04:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.923 04:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:41.923 04:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:41.923 04:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:41.923 04:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:41.923 04:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:41.923 04:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.923 04:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.923 04:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.923 04:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:41.923 04:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:41.923 04:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:41.923 04:06:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:41.923 04:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.923 04:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.923 04:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:41.923 04:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.923 04:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:41.923 04:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:41.923 04:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:41.923 04:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.923 04:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.923 [2024-11-21 04:06:41.884537] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:41.923 [2024-11-21 04:06:41.884569] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:41.923 [2024-11-21 04:06:41.884641] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:42.183 04:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.183 04:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:42.183 04:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:42.183 04:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:42.183 04:06:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:42.183 04:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:42.183 04:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:42.183 04:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:42.183 04:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:42.183 04:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:42.183 04:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:42.183 04:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:42.184 04:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:42.184 04:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:42.184 04:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:42.184 04:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:42.184 04:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.184 04:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:42.184 04:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.184 04:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.184 04:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.184 04:06:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:42.184 "name": "Existed_Raid", 00:08:42.184 "uuid": "689226b9-38fe-4a6a-965e-73f3dcc05041", 00:08:42.184 "strip_size_kb": 64, 00:08:42.184 "state": "offline", 00:08:42.184 "raid_level": "raid0", 00:08:42.184 "superblock": true, 00:08:42.184 "num_base_bdevs": 3, 00:08:42.184 "num_base_bdevs_discovered": 2, 00:08:42.184 "num_base_bdevs_operational": 2, 00:08:42.184 "base_bdevs_list": [ 00:08:42.184 { 00:08:42.184 "name": null, 00:08:42.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:42.184 "is_configured": false, 00:08:42.184 "data_offset": 0, 00:08:42.184 "data_size": 63488 00:08:42.184 }, 00:08:42.184 { 00:08:42.184 "name": "BaseBdev2", 00:08:42.184 "uuid": "cf0e7582-9f0e-48e8-8293-b73286545b5d", 00:08:42.184 "is_configured": true, 00:08:42.184 "data_offset": 2048, 00:08:42.184 "data_size": 63488 00:08:42.184 }, 00:08:42.184 { 00:08:42.184 "name": "BaseBdev3", 00:08:42.184 "uuid": "33bf3611-af32-40bd-8bd5-d125144e35d9", 00:08:42.184 "is_configured": true, 00:08:42.184 "data_offset": 2048, 00:08:42.184 "data_size": 63488 00:08:42.184 } 00:08:42.184 ] 00:08:42.184 }' 00:08:42.184 04:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:42.184 04:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.444 04:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:42.444 04:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:42.444 04:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:42.444 04:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.444 04:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.444 04:06:42 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.444 04:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.444 04:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:42.444 04:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:42.444 04:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:42.444 04:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.444 04:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.444 [2024-11-21 04:06:42.368753] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:42.444 04:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.444 04:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:42.444 04:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:42.444 04:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.444 04:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.444 04:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.444 04:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:42.444 04:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.704 04:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:42.704 04:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:08:42.704 04:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:42.704 04:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.704 04:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.704 [2024-11-21 04:06:42.449662] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:42.704 [2024-11-21 04:06:42.449720] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:08:42.704 04:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.704 04:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:42.704 04:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:42.704 04:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:42.704 04:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.704 04:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.704 04:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.704 04:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.704 04:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:42.704 04:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:42.704 04:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:42.704 04:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:42.704 04:06:42 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:08:42.704 04:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:08:42.704 04:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:42.704 04:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:42.704 BaseBdev2
00:08:42.704 04:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:42.705 04:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:08:42.705 04:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:08:42.705 04:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:08:42.705 04:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:08:42.705 04:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:08:42.705 04:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:08:42.705 04:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:08:42.705 04:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:42.705 04:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:42.705 04:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:42.705 04:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:08:42.705 04:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:42.705 04:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:42.705 [
00:08:42.705 {
00:08:42.705 "name": "BaseBdev2",
00:08:42.705 "aliases": [
00:08:42.705 "f08f7ade-04f6-4cb8-9b35-adf1ab32443c"
00:08:42.705 ],
00:08:42.705 "product_name": "Malloc disk",
00:08:42.705 "block_size": 512,
00:08:42.705 "num_blocks": 65536,
00:08:42.705 "uuid": "f08f7ade-04f6-4cb8-9b35-adf1ab32443c",
00:08:42.705 "assigned_rate_limits": {
00:08:42.705 "rw_ios_per_sec": 0,
00:08:42.705 "rw_mbytes_per_sec": 0,
00:08:42.705 "r_mbytes_per_sec": 0,
00:08:42.705 "w_mbytes_per_sec": 0
00:08:42.705 },
00:08:42.705 "claimed": false,
00:08:42.705 "zoned": false,
00:08:42.705 "supported_io_types": {
00:08:42.705 "read": true,
00:08:42.705 "write": true,
00:08:42.705 "unmap": true,
00:08:42.705 "flush": true,
00:08:42.705 "reset": true,
00:08:42.705 "nvme_admin": false,
00:08:42.705 "nvme_io": false,
00:08:42.705 "nvme_io_md": false,
00:08:42.705 "write_zeroes": true,
00:08:42.705 "zcopy": true,
00:08:42.705 "get_zone_info": false,
00:08:42.705 "zone_management": false,
00:08:42.705 "zone_append": false,
00:08:42.705 "compare": false,
00:08:42.705 "compare_and_write": false,
00:08:42.705 "abort": true,
00:08:42.705 "seek_hole": false,
00:08:42.705 "seek_data": false,
00:08:42.705 "copy": true,
00:08:42.705 "nvme_iov_md": false
00:08:42.705 },
00:08:42.705 "memory_domains": [
00:08:42.705 {
00:08:42.705 "dma_device_id": "system",
00:08:42.705 "dma_device_type": 1
00:08:42.705 },
00:08:42.705 {
00:08:42.705 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:42.705 "dma_device_type": 2
00:08:42.705 }
00:08:42.705 ],
00:08:42.705 "driver_specific": {}
00:08:42.705 }
00:08:42.705 ]
00:08:42.705 04:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:42.705 04:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:08:42.705 04:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:08:42.705 04:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:08:42.705 04:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:08:42.705 04:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:42.705 04:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:42.705 BaseBdev3
00:08:42.705 04:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:42.705 04:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:08:42.705 04:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:08:42.705 04:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:08:42.705 04:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:08:42.705 04:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:08:42.705 04:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:08:42.705 04:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:08:42.705 04:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:42.705 04:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:42.705 04:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:42.705 04:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:08:42.705 04:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:42.705 04:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:42.705 [
00:08:42.705 {
00:08:42.705 "name": "BaseBdev3",
00:08:42.705 "aliases": [
00:08:42.705 "95a41d47-4f7a-4e4d-a1a4-c2d6d33f2582"
00:08:42.705 ],
00:08:42.705 "product_name": "Malloc disk",
00:08:42.705 "block_size": 512,
00:08:42.705 "num_blocks": 65536,
00:08:42.705 "uuid": "95a41d47-4f7a-4e4d-a1a4-c2d6d33f2582",
00:08:42.705 "assigned_rate_limits": {
00:08:42.705 "rw_ios_per_sec": 0,
00:08:42.705 "rw_mbytes_per_sec": 0,
00:08:42.705 "r_mbytes_per_sec": 0,
00:08:42.705 "w_mbytes_per_sec": 0
00:08:42.705 },
00:08:42.705 "claimed": false,
00:08:42.705 "zoned": false,
00:08:42.705 "supported_io_types": {
00:08:42.705 "read": true,
00:08:42.705 "write": true,
00:08:42.705 "unmap": true,
00:08:42.705 "flush": true,
00:08:42.705 "reset": true,
00:08:42.705 "nvme_admin": false,
00:08:42.705 "nvme_io": false,
00:08:42.705 "nvme_io_md": false,
00:08:42.705 "write_zeroes": true,
00:08:42.705 "zcopy": true,
00:08:42.705 "get_zone_info": false,
00:08:42.705 "zone_management": false,
00:08:42.705 "zone_append": false,
00:08:42.705 "compare": false,
00:08:42.705 "compare_and_write": false,
00:08:42.705 "abort": true,
00:08:42.705 "seek_hole": false,
00:08:42.705 "seek_data": false,
00:08:42.705 "copy": true,
00:08:42.705 "nvme_iov_md": false
00:08:42.705 },
00:08:42.705 "memory_domains": [
00:08:42.705 {
00:08:42.705 "dma_device_id": "system",
00:08:42.705 "dma_device_type": 1
00:08:42.705 },
00:08:42.705 {
00:08:42.705 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:42.705 "dma_device_type": 2
00:08:42.705 }
00:08:42.705 ],
00:08:42.705 "driver_specific": {}
00:08:42.705 }
00:08:42.705 ]
00:08:42.705 04:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:42.705 04:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:08:42.705 04:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:08:42.705 04:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:08:42.705 04:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:08:42.705 04:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:42.705 04:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:42.705 [2024-11-21 04:06:42.650290] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:08:42.705 [2024-11-21 04:06:42.650379] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:08:42.705 [2024-11-21 04:06:42.650435] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:08:42.705 [2024-11-21 04:06:42.652610] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:08:42.705 04:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:42.705 04:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:08:42.705 04:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:42.705 04:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:42.705 04:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:42.705 04:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:42.705 04:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:42.705 04:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:42.705 04:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:42.705 04:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:42.705 04:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:42.705 04:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:42.705 04:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:42.705 04:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:42.705 04:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:42.969 04:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:42.969 04:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:42.969 "name": "Existed_Raid",
00:08:42.969 "uuid": "bbb25351-5206-4ad3-8253-8717923fd8f5",
00:08:42.969 "strip_size_kb": 64,
00:08:42.969 "state": "configuring",
00:08:42.969 "raid_level": "raid0",
00:08:42.969 "superblock": true,
00:08:42.969 "num_base_bdevs": 3,
00:08:42.969 "num_base_bdevs_discovered": 2,
00:08:42.969 "num_base_bdevs_operational": 3,
00:08:42.969 "base_bdevs_list": [
00:08:42.969 {
00:08:42.969 "name": "BaseBdev1",
00:08:42.969 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:42.969 "is_configured": false,
00:08:42.969 "data_offset": 0,
00:08:42.969 "data_size": 0
00:08:42.969 },
00:08:42.969 {
00:08:42.969 "name": "BaseBdev2",
00:08:42.969 "uuid": "f08f7ade-04f6-4cb8-9b35-adf1ab32443c",
00:08:42.969 "is_configured": true,
00:08:42.969 "data_offset": 2048,
00:08:42.969 "data_size": 63488
00:08:42.969 },
00:08:42.969 {
00:08:42.969 "name": "BaseBdev3",
00:08:42.969 "uuid": "95a41d47-4f7a-4e4d-a1a4-c2d6d33f2582",
00:08:42.969 "is_configured": true,
00:08:42.969 "data_offset": 2048,
00:08:42.969 "data_size": 63488
00:08:42.969 }
00:08:42.969 ]
00:08:42.969 }'
00:08:42.969 04:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:42.969 04:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:43.231 04:06:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:08:43.231 04:06:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:43.231 04:06:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:43.231 [2024-11-21 04:06:43.113537] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:08:43.231 04:06:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:43.231 04:06:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:08:43.231 04:06:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:43.231 04:06:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:43.231 04:06:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:43.231 04:06:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:43.231 04:06:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:43.231 04:06:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:43.231 04:06:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:43.231 04:06:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:43.231 04:06:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:43.231 04:06:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:43.231 04:06:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:43.231 04:06:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:43.231 04:06:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:43.231 04:06:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:43.231 04:06:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:43.231 "name": "Existed_Raid",
00:08:43.231 "uuid": "bbb25351-5206-4ad3-8253-8717923fd8f5",
00:08:43.231 "strip_size_kb": 64,
00:08:43.231 "state": "configuring",
00:08:43.231 "raid_level": "raid0",
00:08:43.231 "superblock": true,
00:08:43.231 "num_base_bdevs": 3,
00:08:43.231 "num_base_bdevs_discovered": 1,
00:08:43.231 "num_base_bdevs_operational": 3,
00:08:43.231 "base_bdevs_list": [
00:08:43.231 {
00:08:43.231 "name": "BaseBdev1",
00:08:43.231 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:43.231 "is_configured": false,
00:08:43.231 "data_offset": 0,
00:08:43.231 "data_size": 0
00:08:43.231 },
00:08:43.231 {
00:08:43.231 "name": null,
00:08:43.231 "uuid": "f08f7ade-04f6-4cb8-9b35-adf1ab32443c",
00:08:43.231 "is_configured": false,
00:08:43.231 "data_offset": 0,
00:08:43.231 "data_size": 63488
00:08:43.231 },
00:08:43.231 {
00:08:43.231 "name": "BaseBdev3",
00:08:43.231 "uuid": "95a41d47-4f7a-4e4d-a1a4-c2d6d33f2582",
00:08:43.231 "is_configured": true,
00:08:43.231 "data_offset": 2048,
00:08:43.231 "data_size": 63488
00:08:43.231 }
00:08:43.231 ]
00:08:43.231 }'
00:08:43.231 04:06:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:43.231 04:06:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:43.808 04:06:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:43.808 04:06:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:43.808 04:06:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:43.808 04:06:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:08:43.808 04:06:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:43.808 04:06:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:08:43.808 04:06:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:08:43.808 04:06:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:43.808 04:06:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:43.808 [2024-11-21 04:06:43.558113] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:08:43.808 BaseBdev1
00:08:43.808 04:06:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:43.808 04:06:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:08:43.808 04:06:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:08:43.808 04:06:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:08:43.808 04:06:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:08:43.808 04:06:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:08:43.808 04:06:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:08:43.808 04:06:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:08:43.808 04:06:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:43.808 04:06:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:43.808 04:06:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:43.808 04:06:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:08:43.808 04:06:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:43.808 04:06:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:43.808 [
00:08:43.808 {
00:08:43.808 "name": "BaseBdev1",
00:08:43.808 "aliases": [
00:08:43.808 "a258b5fe-4eff-4afb-b315-32e028ea33eb"
00:08:43.808 ],
00:08:43.808 "product_name": "Malloc disk",
00:08:43.808 "block_size": 512,
00:08:43.808 "num_blocks": 65536,
00:08:43.808 "uuid": "a258b5fe-4eff-4afb-b315-32e028ea33eb",
00:08:43.808 "assigned_rate_limits": {
00:08:43.808 "rw_ios_per_sec": 0,
00:08:43.808 "rw_mbytes_per_sec": 0,
00:08:43.808 "r_mbytes_per_sec": 0,
00:08:43.808 "w_mbytes_per_sec": 0
00:08:43.808 },
00:08:43.808 "claimed": true,
00:08:43.808 "claim_type": "exclusive_write",
00:08:43.808 "zoned": false,
00:08:43.808 "supported_io_types": {
00:08:43.808 "read": true,
00:08:43.808 "write": true,
00:08:43.808 "unmap": true,
00:08:43.808 "flush": true,
00:08:43.808 "reset": true,
00:08:43.808 "nvme_admin": false,
00:08:43.808 "nvme_io": false,
00:08:43.808 "nvme_io_md": false,
00:08:43.808 "write_zeroes": true,
00:08:43.808 "zcopy": true,
00:08:43.808 "get_zone_info": false,
00:08:43.808 "zone_management": false,
00:08:43.808 "zone_append": false,
00:08:43.808 "compare": false,
00:08:43.808 "compare_and_write": false,
00:08:43.808 "abort": true,
00:08:43.808 "seek_hole": false,
00:08:43.808 "seek_data": false,
00:08:43.808 "copy": true,
00:08:43.808 "nvme_iov_md": false
00:08:43.808 },
00:08:43.808 "memory_domains": [
00:08:43.808 {
00:08:43.808 "dma_device_id": "system",
00:08:43.808 "dma_device_type": 1
00:08:43.808 },
00:08:43.808 {
00:08:43.808 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:43.808 "dma_device_type": 2
00:08:43.808 }
00:08:43.808 ],
00:08:43.808 "driver_specific": {}
00:08:43.808 }
00:08:43.808 ]
00:08:43.808 04:06:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:43.808 04:06:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:08:43.808 04:06:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:08:43.808 04:06:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:43.808 04:06:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:43.808 04:06:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:43.808 04:06:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:43.808 04:06:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:43.808 04:06:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:43.809 04:06:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:43.809 04:06:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:43.809 04:06:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:43.809 04:06:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:43.809 04:06:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:43.809 04:06:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:43.809 04:06:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:43.809 04:06:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:43.809 04:06:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:43.809 "name": "Existed_Raid",
00:08:43.809 "uuid": "bbb25351-5206-4ad3-8253-8717923fd8f5",
00:08:43.809 "strip_size_kb": 64,
00:08:43.809 "state": "configuring",
00:08:43.809 "raid_level": "raid0",
00:08:43.809 "superblock": true,
00:08:43.809 "num_base_bdevs": 3,
00:08:43.809 "num_base_bdevs_discovered": 2,
00:08:43.809 "num_base_bdevs_operational": 3,
00:08:43.809 "base_bdevs_list": [
00:08:43.809 {
00:08:43.809 "name": "BaseBdev1",
00:08:43.809 "uuid": "a258b5fe-4eff-4afb-b315-32e028ea33eb",
00:08:43.809 "is_configured": true,
00:08:43.809 "data_offset": 2048,
00:08:43.809 "data_size": 63488
00:08:43.809 },
00:08:43.809 {
00:08:43.809 "name": null,
00:08:43.809 "uuid": "f08f7ade-04f6-4cb8-9b35-adf1ab32443c",
00:08:43.809 "is_configured": false,
00:08:43.809 "data_offset": 0,
00:08:43.809 "data_size": 63488
00:08:43.809 },
00:08:43.809 {
00:08:43.809 "name": "BaseBdev3",
00:08:43.809 "uuid": "95a41d47-4f7a-4e4d-a1a4-c2d6d33f2582",
00:08:43.809 "is_configured": true,
00:08:43.809 "data_offset": 2048,
00:08:43.809 "data_size": 63488
00:08:43.809 }
00:08:43.809 ]
00:08:43.809 }'
00:08:43.809 04:06:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:43.809 04:06:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:44.380 04:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:44.380 04:06:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:44.380 04:06:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:44.380 04:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:08:44.380 04:06:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:44.380 04:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:08:44.380 04:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:08:44.380 04:06:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:44.380 04:06:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:44.380 [2024-11-21 04:06:44.105295] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:08:44.380 04:06:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:44.380 04:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:08:44.380 04:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:44.380 04:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:44.380 04:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:44.380 04:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:44.380 04:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:44.380 04:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:44.380 04:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:44.380 04:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:44.380 04:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:44.380 04:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:44.380 04:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:44.380 04:06:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:44.380 04:06:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:44.380 04:06:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:44.380 04:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:44.380 "name": "Existed_Raid",
00:08:44.380 "uuid": "bbb25351-5206-4ad3-8253-8717923fd8f5",
00:08:44.380 "strip_size_kb": 64,
00:08:44.380 "state": "configuring",
00:08:44.380 "raid_level": "raid0",
00:08:44.380 "superblock": true,
00:08:44.380 "num_base_bdevs": 3,
00:08:44.380 "num_base_bdevs_discovered": 1,
00:08:44.380 "num_base_bdevs_operational": 3,
00:08:44.380 "base_bdevs_list": [
00:08:44.380 {
00:08:44.380 "name": "BaseBdev1",
00:08:44.380 "uuid": "a258b5fe-4eff-4afb-b315-32e028ea33eb",
00:08:44.380 "is_configured": true,
00:08:44.380 "data_offset": 2048,
00:08:44.380 "data_size": 63488
00:08:44.380 },
00:08:44.380 {
00:08:44.380 "name": null,
00:08:44.380 "uuid": "f08f7ade-04f6-4cb8-9b35-adf1ab32443c",
00:08:44.380 "is_configured": false,
00:08:44.380 "data_offset": 0,
00:08:44.380 "data_size": 63488
00:08:44.380 },
00:08:44.380 {
00:08:44.380 "name": null,
00:08:44.380 "uuid": "95a41d47-4f7a-4e4d-a1a4-c2d6d33f2582",
00:08:44.380 "is_configured": false,
00:08:44.380 "data_offset": 0,
00:08:44.380 "data_size": 63488
00:08:44.380 }
00:08:44.380 ]
00:08:44.380 }'
00:08:44.380 04:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:44.380 04:06:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:44.640 04:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:44.640 04:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:08:44.640 04:06:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:44.640 04:06:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:44.640 04:06:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:44.900 04:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:08:44.900 04:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:08:44.900 04:06:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:44.900 04:06:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:44.900 [2024-11-21 04:06:44.620401] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:08:44.900 04:06:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:44.900 04:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:08:44.900 04:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:44.900 04:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:44.900 04:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:44.900 04:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:44.900 04:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:44.900 04:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:44.900 04:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:44.900 04:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:44.900 04:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:44.900 04:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:44.900 04:06:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:44.900 04:06:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:44.900 04:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:44.900 04:06:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:44.900 04:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:44.900 "name": "Existed_Raid",
00:08:44.900 "uuid": "bbb25351-5206-4ad3-8253-8717923fd8f5",
00:08:44.900 "strip_size_kb": 64,
00:08:44.900 "state": "configuring",
00:08:44.900 "raid_level": "raid0",
00:08:44.900 "superblock": true,
00:08:44.901 "num_base_bdevs": 3,
00:08:44.901 "num_base_bdevs_discovered": 2,
00:08:44.901 "num_base_bdevs_operational": 3,
00:08:44.901 "base_bdevs_list": [
00:08:44.901 {
00:08:44.901 "name": "BaseBdev1",
00:08:44.901 "uuid": "a258b5fe-4eff-4afb-b315-32e028ea33eb",
00:08:44.901 "is_configured": true,
00:08:44.901 "data_offset": 2048,
00:08:44.901 "data_size": 63488
00:08:44.901 },
00:08:44.901 {
00:08:44.901 "name": null,
00:08:44.901 "uuid": "f08f7ade-04f6-4cb8-9b35-adf1ab32443c",
00:08:44.901 "is_configured": false,
00:08:44.901 "data_offset": 0,
00:08:44.901 "data_size": 63488
00:08:44.901 },
00:08:44.901 {
00:08:44.901 "name": "BaseBdev3",
00:08:44.901 "uuid": "95a41d47-4f7a-4e4d-a1a4-c2d6d33f2582",
00:08:44.901 "is_configured": true,
00:08:44.901 "data_offset": 2048,
00:08:44.901 "data_size": 63488
00:08:44.901 }
00:08:44.901 ]
00:08:44.901 }'
00:08:44.901 04:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:44.901 04:06:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:45.161 04:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:45.161 04:06:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:45.161 04:06:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:45.161 04:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:08:45.161 04:06:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:45.161 04:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:08:45.161 04:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:08:45.161 04:06:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:45.161 04:06:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:45.161 [2024-11-21 04:06:45.111656] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:08:45.421 04:06:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:45.421 04:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:08:45.421 04:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:45.421 04:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:45.421 04:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:45.421 04:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:45.421 04:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:45.421 04:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:45.421 04:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:45.421 04:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:45.421 04:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:45.421 04:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:45.421 04:06:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:45.421 04:06:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:45.421 04:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:45.421 04:06:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:45.421 04:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:45.421 "name": "Existed_Raid",
00:08:45.421 "uuid": "bbb25351-5206-4ad3-8253-8717923fd8f5",
00:08:45.421 "strip_size_kb": 64,
00:08:45.422 "state": "configuring",
00:08:45.422 "raid_level": "raid0",
00:08:45.422 "superblock": true,
00:08:45.422 "num_base_bdevs": 3,
00:08:45.422 "num_base_bdevs_discovered": 1,
00:08:45.422 "num_base_bdevs_operational": 3,
00:08:45.422 "base_bdevs_list": [
00:08:45.422 {
00:08:45.422 "name": null,
00:08:45.422 "uuid": "a258b5fe-4eff-4afb-b315-32e028ea33eb",
00:08:45.422 "is_configured": false,
00:08:45.422 "data_offset": 0,
00:08:45.422 "data_size": 63488
00:08:45.422 },
00:08:45.422 {
00:08:45.422 "name": null,
00:08:45.422 "uuid": "f08f7ade-04f6-4cb8-9b35-adf1ab32443c",
00:08:45.422 "is_configured": false,
00:08:45.422 "data_offset": 0,
00:08:45.422 "data_size": 63488
00:08:45.422 },
00:08:45.422 {
00:08:45.422 "name": "BaseBdev3",
00:08:45.422 "uuid": "95a41d47-4f7a-4e4d-a1a4-c2d6d33f2582",
00:08:45.422 "is_configured": true,
00:08:45.422 "data_offset": 2048,
00:08:45.422 "data_size": 63488
00:08:45.422 }
00:08:45.422 ]
00:08:45.422 }'
00:08:45.422 04:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:45.422 04:06:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:45.682 04:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:08:45.682 04:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:45.682 04:06:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:45.682 04:06:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:45.682 04:06:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:45.682 04:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]]
00:08:45.682 04:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:08:45.682 04:06:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:45.682 04:06:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:45.682 [2024-11-21 04:06:45.575482] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:08:45.682 04:06:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:45.682 04:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:08:45.682 04:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:45.682 04:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:45.682 04:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:45.682 04:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:45.682 04:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:45.682 04:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:45.682 04:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:45.682 04:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:45.682 04:06:45
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:45.682 04:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.682 04:06:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.682 04:06:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.682 04:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:45.682 04:06:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.682 04:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:45.682 "name": "Existed_Raid", 00:08:45.682 "uuid": "bbb25351-5206-4ad3-8253-8717923fd8f5", 00:08:45.682 "strip_size_kb": 64, 00:08:45.682 "state": "configuring", 00:08:45.682 "raid_level": "raid0", 00:08:45.682 "superblock": true, 00:08:45.682 "num_base_bdevs": 3, 00:08:45.682 "num_base_bdevs_discovered": 2, 00:08:45.682 "num_base_bdevs_operational": 3, 00:08:45.682 "base_bdevs_list": [ 00:08:45.682 { 00:08:45.682 "name": null, 00:08:45.682 "uuid": "a258b5fe-4eff-4afb-b315-32e028ea33eb", 00:08:45.682 "is_configured": false, 00:08:45.682 "data_offset": 0, 00:08:45.682 "data_size": 63488 00:08:45.682 }, 00:08:45.682 { 00:08:45.682 "name": "BaseBdev2", 00:08:45.682 "uuid": "f08f7ade-04f6-4cb8-9b35-adf1ab32443c", 00:08:45.682 "is_configured": true, 00:08:45.682 "data_offset": 2048, 00:08:45.682 "data_size": 63488 00:08:45.682 }, 00:08:45.682 { 00:08:45.682 "name": "BaseBdev3", 00:08:45.682 "uuid": "95a41d47-4f7a-4e4d-a1a4-c2d6d33f2582", 00:08:45.682 "is_configured": true, 00:08:45.682 "data_offset": 2048, 00:08:45.682 "data_size": 63488 00:08:45.682 } 00:08:45.682 ] 00:08:45.682 }' 00:08:45.682 04:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:45.682 
04:06:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.253 04:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.253 04:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:46.253 04:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.253 04:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.253 04:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.253 04:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:46.253 04:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.253 04:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:46.253 04:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.253 04:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.253 04:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.253 04:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u a258b5fe-4eff-4afb-b315-32e028ea33eb 00:08:46.253 04:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.253 04:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.253 NewBaseBdev 00:08:46.253 [2024-11-21 04:06:46.131194] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:46.253 [2024-11-21 04:06:46.131411] bdev_raid.c:1734:raid_bdev_configure_cont: 
*DEBUG*: io device register 0x617000001c80 00:08:46.253 [2024-11-21 04:06:46.131428] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:46.253 [2024-11-21 04:06:46.131699] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:08:46.253 [2024-11-21 04:06:46.131820] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:08:46.253 [2024-11-21 04:06:46.131829] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:08:46.253 [2024-11-21 04:06:46.131966] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:46.253 04:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.253 04:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:46.253 04:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:46.253 04:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:46.253 04:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:46.253 04:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:46.253 04:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:46.253 04:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:46.253 04:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.253 04:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.253 04:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.253 04:06:46 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:46.254 04:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.254 04:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.254 [ 00:08:46.254 { 00:08:46.254 "name": "NewBaseBdev", 00:08:46.254 "aliases": [ 00:08:46.254 "a258b5fe-4eff-4afb-b315-32e028ea33eb" 00:08:46.254 ], 00:08:46.254 "product_name": "Malloc disk", 00:08:46.254 "block_size": 512, 00:08:46.254 "num_blocks": 65536, 00:08:46.254 "uuid": "a258b5fe-4eff-4afb-b315-32e028ea33eb", 00:08:46.254 "assigned_rate_limits": { 00:08:46.254 "rw_ios_per_sec": 0, 00:08:46.254 "rw_mbytes_per_sec": 0, 00:08:46.254 "r_mbytes_per_sec": 0, 00:08:46.254 "w_mbytes_per_sec": 0 00:08:46.254 }, 00:08:46.254 "claimed": true, 00:08:46.254 "claim_type": "exclusive_write", 00:08:46.254 "zoned": false, 00:08:46.254 "supported_io_types": { 00:08:46.254 "read": true, 00:08:46.254 "write": true, 00:08:46.254 "unmap": true, 00:08:46.254 "flush": true, 00:08:46.254 "reset": true, 00:08:46.254 "nvme_admin": false, 00:08:46.254 "nvme_io": false, 00:08:46.254 "nvme_io_md": false, 00:08:46.254 "write_zeroes": true, 00:08:46.254 "zcopy": true, 00:08:46.254 "get_zone_info": false, 00:08:46.254 "zone_management": false, 00:08:46.254 "zone_append": false, 00:08:46.254 "compare": false, 00:08:46.254 "compare_and_write": false, 00:08:46.254 "abort": true, 00:08:46.254 "seek_hole": false, 00:08:46.254 "seek_data": false, 00:08:46.254 "copy": true, 00:08:46.254 "nvme_iov_md": false 00:08:46.254 }, 00:08:46.254 "memory_domains": [ 00:08:46.254 { 00:08:46.254 "dma_device_id": "system", 00:08:46.254 "dma_device_type": 1 00:08:46.254 }, 00:08:46.254 { 00:08:46.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.254 "dma_device_type": 2 00:08:46.254 } 00:08:46.254 ], 00:08:46.254 "driver_specific": {} 00:08:46.254 } 00:08:46.254 
] 00:08:46.254 04:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.254 04:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:46.254 04:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:46.254 04:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:46.254 04:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:46.254 04:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:46.254 04:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:46.254 04:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:46.254 04:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:46.254 04:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.254 04:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:46.254 04:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.254 04:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.254 04:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:46.254 04:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.254 04:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.254 04:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.254 
04:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:46.254 "name": "Existed_Raid", 00:08:46.254 "uuid": "bbb25351-5206-4ad3-8253-8717923fd8f5", 00:08:46.254 "strip_size_kb": 64, 00:08:46.254 "state": "online", 00:08:46.254 "raid_level": "raid0", 00:08:46.254 "superblock": true, 00:08:46.254 "num_base_bdevs": 3, 00:08:46.254 "num_base_bdevs_discovered": 3, 00:08:46.254 "num_base_bdevs_operational": 3, 00:08:46.254 "base_bdevs_list": [ 00:08:46.254 { 00:08:46.254 "name": "NewBaseBdev", 00:08:46.254 "uuid": "a258b5fe-4eff-4afb-b315-32e028ea33eb", 00:08:46.254 "is_configured": true, 00:08:46.254 "data_offset": 2048, 00:08:46.254 "data_size": 63488 00:08:46.254 }, 00:08:46.254 { 00:08:46.254 "name": "BaseBdev2", 00:08:46.254 "uuid": "f08f7ade-04f6-4cb8-9b35-adf1ab32443c", 00:08:46.254 "is_configured": true, 00:08:46.254 "data_offset": 2048, 00:08:46.254 "data_size": 63488 00:08:46.254 }, 00:08:46.254 { 00:08:46.254 "name": "BaseBdev3", 00:08:46.254 "uuid": "95a41d47-4f7a-4e4d-a1a4-c2d6d33f2582", 00:08:46.254 "is_configured": true, 00:08:46.254 "data_offset": 2048, 00:08:46.254 "data_size": 63488 00:08:46.254 } 00:08:46.254 ] 00:08:46.254 }' 00:08:46.254 04:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:46.254 04:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.824 04:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:46.824 04:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:46.824 04:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:46.824 04:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:46.824 04:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 
00:08:46.824 04:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:46.824 04:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:46.824 04:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:46.824 04:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.824 04:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.824 [2024-11-21 04:06:46.598800] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:46.824 04:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.824 04:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:46.824 "name": "Existed_Raid", 00:08:46.824 "aliases": [ 00:08:46.824 "bbb25351-5206-4ad3-8253-8717923fd8f5" 00:08:46.824 ], 00:08:46.824 "product_name": "Raid Volume", 00:08:46.824 "block_size": 512, 00:08:46.824 "num_blocks": 190464, 00:08:46.824 "uuid": "bbb25351-5206-4ad3-8253-8717923fd8f5", 00:08:46.824 "assigned_rate_limits": { 00:08:46.824 "rw_ios_per_sec": 0, 00:08:46.824 "rw_mbytes_per_sec": 0, 00:08:46.824 "r_mbytes_per_sec": 0, 00:08:46.824 "w_mbytes_per_sec": 0 00:08:46.824 }, 00:08:46.824 "claimed": false, 00:08:46.824 "zoned": false, 00:08:46.824 "supported_io_types": { 00:08:46.824 "read": true, 00:08:46.824 "write": true, 00:08:46.824 "unmap": true, 00:08:46.824 "flush": true, 00:08:46.824 "reset": true, 00:08:46.824 "nvme_admin": false, 00:08:46.824 "nvme_io": false, 00:08:46.824 "nvme_io_md": false, 00:08:46.824 "write_zeroes": true, 00:08:46.824 "zcopy": false, 00:08:46.824 "get_zone_info": false, 00:08:46.824 "zone_management": false, 00:08:46.824 "zone_append": false, 00:08:46.824 "compare": false, 00:08:46.824 "compare_and_write": false, 
00:08:46.824 "abort": false, 00:08:46.824 "seek_hole": false, 00:08:46.824 "seek_data": false, 00:08:46.824 "copy": false, 00:08:46.824 "nvme_iov_md": false 00:08:46.824 }, 00:08:46.824 "memory_domains": [ 00:08:46.824 { 00:08:46.824 "dma_device_id": "system", 00:08:46.824 "dma_device_type": 1 00:08:46.824 }, 00:08:46.824 { 00:08:46.824 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.824 "dma_device_type": 2 00:08:46.824 }, 00:08:46.824 { 00:08:46.824 "dma_device_id": "system", 00:08:46.824 "dma_device_type": 1 00:08:46.824 }, 00:08:46.824 { 00:08:46.824 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.824 "dma_device_type": 2 00:08:46.824 }, 00:08:46.824 { 00:08:46.824 "dma_device_id": "system", 00:08:46.824 "dma_device_type": 1 00:08:46.824 }, 00:08:46.824 { 00:08:46.824 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.824 "dma_device_type": 2 00:08:46.824 } 00:08:46.824 ], 00:08:46.824 "driver_specific": { 00:08:46.824 "raid": { 00:08:46.824 "uuid": "bbb25351-5206-4ad3-8253-8717923fd8f5", 00:08:46.824 "strip_size_kb": 64, 00:08:46.824 "state": "online", 00:08:46.824 "raid_level": "raid0", 00:08:46.824 "superblock": true, 00:08:46.824 "num_base_bdevs": 3, 00:08:46.824 "num_base_bdevs_discovered": 3, 00:08:46.824 "num_base_bdevs_operational": 3, 00:08:46.824 "base_bdevs_list": [ 00:08:46.824 { 00:08:46.824 "name": "NewBaseBdev", 00:08:46.824 "uuid": "a258b5fe-4eff-4afb-b315-32e028ea33eb", 00:08:46.824 "is_configured": true, 00:08:46.824 "data_offset": 2048, 00:08:46.824 "data_size": 63488 00:08:46.824 }, 00:08:46.824 { 00:08:46.824 "name": "BaseBdev2", 00:08:46.824 "uuid": "f08f7ade-04f6-4cb8-9b35-adf1ab32443c", 00:08:46.824 "is_configured": true, 00:08:46.824 "data_offset": 2048, 00:08:46.824 "data_size": 63488 00:08:46.824 }, 00:08:46.824 { 00:08:46.824 "name": "BaseBdev3", 00:08:46.824 "uuid": "95a41d47-4f7a-4e4d-a1a4-c2d6d33f2582", 00:08:46.824 "is_configured": true, 00:08:46.824 "data_offset": 2048, 00:08:46.824 "data_size": 63488 00:08:46.824 } 
00:08:46.824 ] 00:08:46.824 } 00:08:46.824 } 00:08:46.824 }' 00:08:46.824 04:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:46.825 04:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:46.825 BaseBdev2 00:08:46.825 BaseBdev3' 00:08:46.825 04:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:46.825 04:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:46.825 04:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:46.825 04:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:46.825 04:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:46.825 04:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.825 04:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.825 04:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.825 04:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:46.825 04:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:46.825 04:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:46.825 04:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:46.825 04:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:46.825 04:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.825 04:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.825 04:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.085 04:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:47.085 04:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:47.085 04:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:47.085 04:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:47.085 04:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:47.085 04:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.085 04:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.085 04:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.085 04:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:47.085 04:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:47.085 04:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:47.085 04:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.085 04:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.085 [2024-11-21 04:06:46.858017] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:47.085 [2024-11-21 04:06:46.858047] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:47.085 [2024-11-21 04:06:46.858133] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:47.085 [2024-11-21 04:06:46.858198] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:47.085 [2024-11-21 04:06:46.858212] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:08:47.085 04:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.085 04:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 75639 00:08:47.085 04:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 75639 ']' 00:08:47.085 04:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 75639 00:08:47.085 04:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:47.085 04:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:47.085 04:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75639 00:08:47.085 killing process with pid 75639 00:08:47.085 04:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:47.085 04:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:47.085 04:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75639' 00:08:47.085 04:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 75639 00:08:47.085 [2024-11-21 
04:06:46.902694] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:47.085 04:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 75639 00:08:47.085 [2024-11-21 04:06:46.962978] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:47.344 04:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:47.344 ************************************ 00:08:47.344 END TEST raid_state_function_test_sb 00:08:47.344 ************************************ 00:08:47.344 00:08:47.344 real 0m8.940s 00:08:47.344 user 0m14.934s 00:08:47.344 sys 0m1.943s 00:08:47.344 04:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:47.344 04:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.603 04:06:47 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:08:47.603 04:06:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:47.603 04:06:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:47.603 04:06:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:47.603 ************************************ 00:08:47.603 START TEST raid_superblock_test 00:08:47.603 ************************************ 00:08:47.603 04:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 00:08:47.603 04:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:08:47.603 04:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:08:47.603 04:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:47.603 04:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:47.603 04:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # 
base_bdevs_pt=() 00:08:47.603 04:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:47.603 04:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:47.603 04:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:47.603 04:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:47.603 04:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:47.603 04:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:47.603 04:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:47.603 04:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:47.603 04:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:08:47.603 04:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:47.603 04:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:47.603 04:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=76248 00:08:47.603 04:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:47.603 04:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 76248 00:08:47.603 04:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 76248 ']' 00:08:47.603 04:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:47.603 04:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:47.603 04:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:47.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:47.603 04:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:47.603 04:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.603 [2024-11-21 04:06:47.452207] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:08:47.603 [2024-11-21 04:06:47.452461] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76248 ] 00:08:47.863 [2024-11-21 04:06:47.607691] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.863 [2024-11-21 04:06:47.647853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.863 [2024-11-21 04:06:47.723787] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:47.863 [2024-11-21 04:06:47.723940] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:48.432 04:06:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:48.432 04:06:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:48.432 04:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:48.432 04:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:48.432 04:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:48.432 04:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:48.432 04:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 
00:08:48.432 04:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:48.432 04:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:48.432 04:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:48.432 04:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:48.432 04:06:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.432 04:06:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.432 malloc1 00:08:48.432 04:06:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.432 04:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:48.432 04:06:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.432 04:06:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.432 [2024-11-21 04:06:48.330726] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:48.432 [2024-11-21 04:06:48.330804] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:48.432 [2024-11-21 04:06:48.330827] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:08:48.432 [2024-11-21 04:06:48.330852] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:48.432 [2024-11-21 04:06:48.333409] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:48.432 [2024-11-21 04:06:48.333496] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:48.432 pt1 00:08:48.432 04:06:48 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.432 04:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:48.432 04:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:48.432 04:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:48.432 04:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:48.432 04:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:48.432 04:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:48.432 04:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:48.432 04:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:48.432 04:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:48.432 04:06:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.432 04:06:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.432 malloc2 00:08:48.432 04:06:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.432 04:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:48.432 04:06:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.432 04:06:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.432 [2024-11-21 04:06:48.365389] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:48.432 [2024-11-21 04:06:48.365483] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:08:48.432 [2024-11-21 04:06:48.365536] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:48.432 [2024-11-21 04:06:48.365584] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:48.432 [2024-11-21 04:06:48.368066] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:48.432 [2024-11-21 04:06:48.368191] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:48.432 pt2 00:08:48.432 04:06:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.432 04:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:48.432 04:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:48.432 04:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:08:48.432 04:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:08:48.432 04:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:48.432 04:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:48.432 04:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:48.432 04:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:48.432 04:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:08:48.432 04:06:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.432 04:06:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.432 malloc3 00:08:48.432 04:06:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.432 04:06:48 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:48.432 04:06:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.432 04:06:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.691 [2024-11-21 04:06:48.403981] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:48.691 [2024-11-21 04:06:48.404118] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:48.691 [2024-11-21 04:06:48.404187] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:48.692 [2024-11-21 04:06:48.404248] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:48.692 [2024-11-21 04:06:48.406761] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:48.692 [2024-11-21 04:06:48.406837] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:48.692 pt3 00:08:48.692 04:06:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.692 04:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:48.692 04:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:48.692 04:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:08:48.692 04:06:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.692 04:06:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.692 [2024-11-21 04:06:48.416053] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:48.692 [2024-11-21 04:06:48.418267] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is 
claimed 00:08:48.692 [2024-11-21 04:06:48.418390] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:48.692 [2024-11-21 04:06:48.418620] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:08:48.692 [2024-11-21 04:06:48.418676] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:48.692 [2024-11-21 04:06:48.419062] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:08:48.692 [2024-11-21 04:06:48.419300] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:08:48.692 [2024-11-21 04:06:48.419366] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:08:48.692 [2024-11-21 04:06:48.419591] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:48.692 04:06:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.692 04:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:48.692 04:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:48.692 04:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:48.692 04:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:48.692 04:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:48.692 04:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:48.692 04:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.692 04:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.692 04:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:48.692 04:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.692 04:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:48.692 04:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.692 04:06:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.692 04:06:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.692 04:06:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.692 04:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.692 "name": "raid_bdev1", 00:08:48.692 "uuid": "1d242b06-960a-429b-aaf8-cb5c8618f176", 00:08:48.692 "strip_size_kb": 64, 00:08:48.692 "state": "online", 00:08:48.692 "raid_level": "raid0", 00:08:48.692 "superblock": true, 00:08:48.692 "num_base_bdevs": 3, 00:08:48.692 "num_base_bdevs_discovered": 3, 00:08:48.692 "num_base_bdevs_operational": 3, 00:08:48.692 "base_bdevs_list": [ 00:08:48.692 { 00:08:48.692 "name": "pt1", 00:08:48.692 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:48.692 "is_configured": true, 00:08:48.692 "data_offset": 2048, 00:08:48.692 "data_size": 63488 00:08:48.692 }, 00:08:48.692 { 00:08:48.692 "name": "pt2", 00:08:48.692 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:48.692 "is_configured": true, 00:08:48.692 "data_offset": 2048, 00:08:48.692 "data_size": 63488 00:08:48.692 }, 00:08:48.692 { 00:08:48.692 "name": "pt3", 00:08:48.692 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:48.692 "is_configured": true, 00:08:48.692 "data_offset": 2048, 00:08:48.692 "data_size": 63488 00:08:48.692 } 00:08:48.692 ] 00:08:48.692 }' 00:08:48.692 04:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.692 04:06:48 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.952 04:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:48.952 04:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:48.952 04:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:48.952 04:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:48.952 04:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:48.952 04:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:48.952 04:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:48.952 04:06:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.952 04:06:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.952 04:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:48.952 [2024-11-21 04:06:48.895503] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:48.952 04:06:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.212 04:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:49.212 "name": "raid_bdev1", 00:08:49.212 "aliases": [ 00:08:49.212 "1d242b06-960a-429b-aaf8-cb5c8618f176" 00:08:49.212 ], 00:08:49.212 "product_name": "Raid Volume", 00:08:49.212 "block_size": 512, 00:08:49.212 "num_blocks": 190464, 00:08:49.212 "uuid": "1d242b06-960a-429b-aaf8-cb5c8618f176", 00:08:49.212 "assigned_rate_limits": { 00:08:49.212 "rw_ios_per_sec": 0, 00:08:49.212 "rw_mbytes_per_sec": 0, 00:08:49.212 "r_mbytes_per_sec": 0, 00:08:49.212 "w_mbytes_per_sec": 0 00:08:49.212 }, 00:08:49.212 "claimed": 
false, 00:08:49.212 "zoned": false, 00:08:49.212 "supported_io_types": { 00:08:49.212 "read": true, 00:08:49.212 "write": true, 00:08:49.212 "unmap": true, 00:08:49.212 "flush": true, 00:08:49.212 "reset": true, 00:08:49.212 "nvme_admin": false, 00:08:49.212 "nvme_io": false, 00:08:49.212 "nvme_io_md": false, 00:08:49.212 "write_zeroes": true, 00:08:49.212 "zcopy": false, 00:08:49.212 "get_zone_info": false, 00:08:49.212 "zone_management": false, 00:08:49.212 "zone_append": false, 00:08:49.212 "compare": false, 00:08:49.212 "compare_and_write": false, 00:08:49.212 "abort": false, 00:08:49.212 "seek_hole": false, 00:08:49.212 "seek_data": false, 00:08:49.212 "copy": false, 00:08:49.212 "nvme_iov_md": false 00:08:49.212 }, 00:08:49.212 "memory_domains": [ 00:08:49.212 { 00:08:49.212 "dma_device_id": "system", 00:08:49.212 "dma_device_type": 1 00:08:49.212 }, 00:08:49.212 { 00:08:49.212 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.212 "dma_device_type": 2 00:08:49.212 }, 00:08:49.212 { 00:08:49.212 "dma_device_id": "system", 00:08:49.212 "dma_device_type": 1 00:08:49.212 }, 00:08:49.212 { 00:08:49.212 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.212 "dma_device_type": 2 00:08:49.212 }, 00:08:49.212 { 00:08:49.212 "dma_device_id": "system", 00:08:49.212 "dma_device_type": 1 00:08:49.212 }, 00:08:49.212 { 00:08:49.212 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.212 "dma_device_type": 2 00:08:49.212 } 00:08:49.212 ], 00:08:49.212 "driver_specific": { 00:08:49.212 "raid": { 00:08:49.212 "uuid": "1d242b06-960a-429b-aaf8-cb5c8618f176", 00:08:49.212 "strip_size_kb": 64, 00:08:49.212 "state": "online", 00:08:49.212 "raid_level": "raid0", 00:08:49.212 "superblock": true, 00:08:49.212 "num_base_bdevs": 3, 00:08:49.212 "num_base_bdevs_discovered": 3, 00:08:49.212 "num_base_bdevs_operational": 3, 00:08:49.212 "base_bdevs_list": [ 00:08:49.212 { 00:08:49.212 "name": "pt1", 00:08:49.212 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:49.212 "is_configured": 
true, 00:08:49.212 "data_offset": 2048, 00:08:49.212 "data_size": 63488 00:08:49.212 }, 00:08:49.212 { 00:08:49.212 "name": "pt2", 00:08:49.212 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:49.212 "is_configured": true, 00:08:49.212 "data_offset": 2048, 00:08:49.212 "data_size": 63488 00:08:49.212 }, 00:08:49.212 { 00:08:49.212 "name": "pt3", 00:08:49.212 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:49.212 "is_configured": true, 00:08:49.212 "data_offset": 2048, 00:08:49.212 "data_size": 63488 00:08:49.212 } 00:08:49.212 ] 00:08:49.212 } 00:08:49.212 } 00:08:49.212 }' 00:08:49.212 04:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:49.212 04:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:49.212 pt2 00:08:49.212 pt3' 00:08:49.212 04:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:49.212 04:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:49.212 04:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:49.212 04:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:49.212 04:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.212 04:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:49.212 04:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.212 04:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.212 04:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:49.212 04:06:49 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:49.212 04:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:49.212 04:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:49.212 04:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:49.212 04:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.212 04:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.212 04:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.212 04:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:49.212 04:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:49.212 04:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:49.212 04:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:49.212 04:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:49.212 04:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.212 04:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.212 04:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.472 04:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:49.472 04:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:49.472 04:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs 
-b raid_bdev1 00:08:49.472 04:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.472 04:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.472 04:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:49.472 [2024-11-21 04:06:49.198909] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:49.472 04:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.472 04:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=1d242b06-960a-429b-aaf8-cb5c8618f176 00:08:49.472 04:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 1d242b06-960a-429b-aaf8-cb5c8618f176 ']' 00:08:49.472 04:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:49.472 04:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.472 04:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.472 [2024-11-21 04:06:49.254532] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:49.472 [2024-11-21 04:06:49.254565] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:49.472 [2024-11-21 04:06:49.254651] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:49.472 [2024-11-21 04:06:49.254719] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:49.472 [2024-11-21 04:06:49.254732] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:08:49.472 04:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.472 04:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:08:49.472 04:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:49.472 04:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.472 04:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.472 04:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.472 04:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:49.472 04:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:49.472 04:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:49.472 04:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:49.472 04:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.472 04:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.472 04:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.472 04:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:49.472 04:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:49.472 04:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.472 04:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.472 04:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.472 04:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:49.472 04:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:08:49.472 04:06:49 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.472 04:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.472 04:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.472 04:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:49.472 04:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:49.472 04:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.472 04:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.472 04:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.472 04:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:49.472 04:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:49.472 04:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:49.472 04:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:49.472 04:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:49.472 04:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:49.472 04:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:49.472 04:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:49.472 04:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 
malloc3'\''' -n raid_bdev1 00:08:49.472 04:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.472 04:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.472 [2024-11-21 04:06:49.406318] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:49.472 [2024-11-21 04:06:49.408626] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:49.472 [2024-11-21 04:06:49.408726] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:08:49.472 [2024-11-21 04:06:49.408832] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:49.472 [2024-11-21 04:06:49.408944] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:49.472 [2024-11-21 04:06:49.409059] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:08:49.472 [2024-11-21 04:06:49.409121] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:49.472 [2024-11-21 04:06:49.409186] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:08:49.472 request: 00:08:49.472 { 00:08:49.472 "name": "raid_bdev1", 00:08:49.472 "raid_level": "raid0", 00:08:49.472 "base_bdevs": [ 00:08:49.472 "malloc1", 00:08:49.472 "malloc2", 00:08:49.472 "malloc3" 00:08:49.472 ], 00:08:49.472 "strip_size_kb": 64, 00:08:49.472 "superblock": false, 00:08:49.472 "method": "bdev_raid_create", 00:08:49.472 "req_id": 1 00:08:49.472 } 00:08:49.472 Got JSON-RPC error response 00:08:49.472 response: 00:08:49.472 { 00:08:49.472 "code": -17, 00:08:49.472 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:49.472 } 00:08:49.472 04:06:49 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:49.472 04:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:49.472 04:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:49.472 04:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:49.472 04:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:49.472 04:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.472 04:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:49.472 04:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.472 04:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.472 04:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.733 04:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:49.733 04:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:49.733 04:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:49.733 04:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.733 04:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.733 [2024-11-21 04:06:49.470154] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:49.733 [2024-11-21 04:06:49.470270] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:49.733 [2024-11-21 04:06:49.470309] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:49.733 [2024-11-21 04:06:49.470365] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:49.733 [2024-11-21 04:06:49.472910] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:49.733 [2024-11-21 04:06:49.472989] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:49.733 [2024-11-21 04:06:49.473089] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:49.733 [2024-11-21 04:06:49.473202] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:49.733 pt1 00:08:49.733 04:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.733 04:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:49.733 04:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:49.733 04:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:49.733 04:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:49.733 04:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:49.733 04:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:49.733 04:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.733 04:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.733 04:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.733 04:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.733 04:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:49.733 04:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- 
# rpc_cmd bdev_raid_get_bdevs all 00:08:49.733 04:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.733 04:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.733 04:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.733 04:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.733 "name": "raid_bdev1", 00:08:49.733 "uuid": "1d242b06-960a-429b-aaf8-cb5c8618f176", 00:08:49.733 "strip_size_kb": 64, 00:08:49.733 "state": "configuring", 00:08:49.733 "raid_level": "raid0", 00:08:49.733 "superblock": true, 00:08:49.733 "num_base_bdevs": 3, 00:08:49.733 "num_base_bdevs_discovered": 1, 00:08:49.733 "num_base_bdevs_operational": 3, 00:08:49.733 "base_bdevs_list": [ 00:08:49.733 { 00:08:49.733 "name": "pt1", 00:08:49.733 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:49.733 "is_configured": true, 00:08:49.733 "data_offset": 2048, 00:08:49.733 "data_size": 63488 00:08:49.733 }, 00:08:49.733 { 00:08:49.733 "name": null, 00:08:49.733 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:49.733 "is_configured": false, 00:08:49.733 "data_offset": 2048, 00:08:49.733 "data_size": 63488 00:08:49.733 }, 00:08:49.733 { 00:08:49.733 "name": null, 00:08:49.733 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:49.733 "is_configured": false, 00:08:49.733 "data_offset": 2048, 00:08:49.733 "data_size": 63488 00:08:49.733 } 00:08:49.733 ] 00:08:49.733 }' 00:08:49.733 04:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.733 04:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.000 04:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:08:50.000 04:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:08:50.000 04:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.000 04:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.000 [2024-11-21 04:06:49.917403] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:50.000 [2024-11-21 04:06:49.917557] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:50.000 [2024-11-21 04:06:49.917590] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:50.000 [2024-11-21 04:06:49.917607] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:50.000 [2024-11-21 04:06:49.918063] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:50.000 [2024-11-21 04:06:49.918084] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:50.000 [2024-11-21 04:06:49.918160] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:50.000 [2024-11-21 04:06:49.918186] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:50.000 pt2 00:08:50.000 04:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.000 04:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:08:50.000 04:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.000 04:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.000 [2024-11-21 04:06:49.929399] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:08:50.000 04:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.000 04:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:50.000 04:06:49 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:50.000 04:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:50.000 04:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:50.000 04:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:50.000 04:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:50.000 04:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:50.000 04:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:50.000 04:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:50.000 04:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:50.000 04:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:50.000 04:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.000 04:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.000 04:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.000 04:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.297 04:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.297 "name": "raid_bdev1", 00:08:50.297 "uuid": "1d242b06-960a-429b-aaf8-cb5c8618f176", 00:08:50.297 "strip_size_kb": 64, 00:08:50.297 "state": "configuring", 00:08:50.297 "raid_level": "raid0", 00:08:50.297 "superblock": true, 00:08:50.297 "num_base_bdevs": 3, 00:08:50.297 "num_base_bdevs_discovered": 1, 00:08:50.297 "num_base_bdevs_operational": 3, 00:08:50.297 "base_bdevs_list": [ 
00:08:50.297 { 00:08:50.297 "name": "pt1", 00:08:50.297 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:50.297 "is_configured": true, 00:08:50.297 "data_offset": 2048, 00:08:50.297 "data_size": 63488 00:08:50.297 }, 00:08:50.297 { 00:08:50.297 "name": null, 00:08:50.297 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:50.297 "is_configured": false, 00:08:50.297 "data_offset": 0, 00:08:50.297 "data_size": 63488 00:08:50.297 }, 00:08:50.297 { 00:08:50.297 "name": null, 00:08:50.297 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:50.297 "is_configured": false, 00:08:50.297 "data_offset": 2048, 00:08:50.297 "data_size": 63488 00:08:50.297 } 00:08:50.297 ] 00:08:50.297 }' 00:08:50.297 04:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.297 04:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.558 04:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:50.558 04:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:50.558 04:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:50.558 04:06:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.558 04:06:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.558 [2024-11-21 04:06:50.336640] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:50.558 [2024-11-21 04:06:50.336745] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:50.558 [2024-11-21 04:06:50.336803] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:50.558 [2024-11-21 04:06:50.336845] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:50.558 [2024-11-21 04:06:50.337300] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:50.558 [2024-11-21 04:06:50.337364] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:50.558 [2024-11-21 04:06:50.337489] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:50.558 [2024-11-21 04:06:50.337553] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:50.558 pt2 00:08:50.558 04:06:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.558 04:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:50.558 04:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:50.558 04:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:50.558 04:06:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.558 04:06:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.558 [2024-11-21 04:06:50.348628] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:50.558 [2024-11-21 04:06:50.348707] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:50.558 [2024-11-21 04:06:50.348761] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:50.558 [2024-11-21 04:06:50.348791] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:50.558 [2024-11-21 04:06:50.349196] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:50.558 [2024-11-21 04:06:50.349266] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:50.558 [2024-11-21 04:06:50.349374] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 
00:08:50.558 [2024-11-21 04:06:50.349426] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:50.558 [2024-11-21 04:06:50.349574] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:08:50.558 [2024-11-21 04:06:50.349614] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:50.558 [2024-11-21 04:06:50.349918] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:08:50.558 [2024-11-21 04:06:50.350088] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:08:50.558 [2024-11-21 04:06:50.350135] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:08:50.558 [2024-11-21 04:06:50.350320] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:50.558 pt3 00:08:50.558 04:06:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.558 04:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:50.558 04:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:50.558 04:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:50.558 04:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:50.558 04:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:50.558 04:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:50.558 04:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:50.558 04:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:50.558 04:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:50.558 04:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:50.558 04:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:50.558 04:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:50.558 04:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.558 04:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:50.558 04:06:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.558 04:06:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.558 04:06:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.558 04:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.558 "name": "raid_bdev1", 00:08:50.558 "uuid": "1d242b06-960a-429b-aaf8-cb5c8618f176", 00:08:50.558 "strip_size_kb": 64, 00:08:50.558 "state": "online", 00:08:50.558 "raid_level": "raid0", 00:08:50.558 "superblock": true, 00:08:50.558 "num_base_bdevs": 3, 00:08:50.558 "num_base_bdevs_discovered": 3, 00:08:50.558 "num_base_bdevs_operational": 3, 00:08:50.558 "base_bdevs_list": [ 00:08:50.558 { 00:08:50.558 "name": "pt1", 00:08:50.558 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:50.558 "is_configured": true, 00:08:50.558 "data_offset": 2048, 00:08:50.558 "data_size": 63488 00:08:50.558 }, 00:08:50.558 { 00:08:50.558 "name": "pt2", 00:08:50.558 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:50.558 "is_configured": true, 00:08:50.558 "data_offset": 2048, 00:08:50.558 "data_size": 63488 00:08:50.558 }, 00:08:50.558 { 00:08:50.558 "name": "pt3", 00:08:50.558 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:50.558 "is_configured": true, 00:08:50.558 "data_offset": 2048, 00:08:50.558 
"data_size": 63488 00:08:50.558 } 00:08:50.558 ] 00:08:50.558 }' 00:08:50.558 04:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.558 04:06:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.131 04:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:51.131 04:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:51.131 04:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:51.131 04:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:51.131 04:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:51.131 04:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:51.131 04:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:51.131 04:06:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.131 04:06:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.131 04:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:51.131 [2024-11-21 04:06:50.812210] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:51.131 04:06:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.131 04:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:51.131 "name": "raid_bdev1", 00:08:51.131 "aliases": [ 00:08:51.131 "1d242b06-960a-429b-aaf8-cb5c8618f176" 00:08:51.131 ], 00:08:51.131 "product_name": "Raid Volume", 00:08:51.131 "block_size": 512, 00:08:51.131 "num_blocks": 190464, 00:08:51.131 "uuid": "1d242b06-960a-429b-aaf8-cb5c8618f176", 00:08:51.131 "assigned_rate_limits": { 
00:08:51.131 "rw_ios_per_sec": 0, 00:08:51.131 "rw_mbytes_per_sec": 0, 00:08:51.131 "r_mbytes_per_sec": 0, 00:08:51.131 "w_mbytes_per_sec": 0 00:08:51.131 }, 00:08:51.131 "claimed": false, 00:08:51.131 "zoned": false, 00:08:51.131 "supported_io_types": { 00:08:51.131 "read": true, 00:08:51.131 "write": true, 00:08:51.131 "unmap": true, 00:08:51.131 "flush": true, 00:08:51.131 "reset": true, 00:08:51.131 "nvme_admin": false, 00:08:51.131 "nvme_io": false, 00:08:51.131 "nvme_io_md": false, 00:08:51.131 "write_zeroes": true, 00:08:51.131 "zcopy": false, 00:08:51.131 "get_zone_info": false, 00:08:51.131 "zone_management": false, 00:08:51.131 "zone_append": false, 00:08:51.131 "compare": false, 00:08:51.131 "compare_and_write": false, 00:08:51.131 "abort": false, 00:08:51.131 "seek_hole": false, 00:08:51.131 "seek_data": false, 00:08:51.131 "copy": false, 00:08:51.131 "nvme_iov_md": false 00:08:51.131 }, 00:08:51.131 "memory_domains": [ 00:08:51.131 { 00:08:51.131 "dma_device_id": "system", 00:08:51.131 "dma_device_type": 1 00:08:51.131 }, 00:08:51.131 { 00:08:51.131 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.131 "dma_device_type": 2 00:08:51.131 }, 00:08:51.131 { 00:08:51.131 "dma_device_id": "system", 00:08:51.131 "dma_device_type": 1 00:08:51.131 }, 00:08:51.131 { 00:08:51.131 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.131 "dma_device_type": 2 00:08:51.131 }, 00:08:51.131 { 00:08:51.131 "dma_device_id": "system", 00:08:51.131 "dma_device_type": 1 00:08:51.131 }, 00:08:51.131 { 00:08:51.131 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.131 "dma_device_type": 2 00:08:51.131 } 00:08:51.131 ], 00:08:51.131 "driver_specific": { 00:08:51.131 "raid": { 00:08:51.131 "uuid": "1d242b06-960a-429b-aaf8-cb5c8618f176", 00:08:51.131 "strip_size_kb": 64, 00:08:51.131 "state": "online", 00:08:51.131 "raid_level": "raid0", 00:08:51.131 "superblock": true, 00:08:51.131 "num_base_bdevs": 3, 00:08:51.131 "num_base_bdevs_discovered": 3, 00:08:51.131 
"num_base_bdevs_operational": 3, 00:08:51.131 "base_bdevs_list": [ 00:08:51.131 { 00:08:51.131 "name": "pt1", 00:08:51.131 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:51.131 "is_configured": true, 00:08:51.131 "data_offset": 2048, 00:08:51.131 "data_size": 63488 00:08:51.131 }, 00:08:51.131 { 00:08:51.131 "name": "pt2", 00:08:51.131 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:51.131 "is_configured": true, 00:08:51.131 "data_offset": 2048, 00:08:51.131 "data_size": 63488 00:08:51.131 }, 00:08:51.131 { 00:08:51.131 "name": "pt3", 00:08:51.131 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:51.131 "is_configured": true, 00:08:51.131 "data_offset": 2048, 00:08:51.131 "data_size": 63488 00:08:51.131 } 00:08:51.131 ] 00:08:51.131 } 00:08:51.131 } 00:08:51.131 }' 00:08:51.131 04:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:51.131 04:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:51.131 pt2 00:08:51.131 pt3' 00:08:51.131 04:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:51.131 04:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:51.131 04:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:51.131 04:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:51.131 04:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:51.131 04:06:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.131 04:06:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.132 04:06:50 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.132 04:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:51.132 04:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:51.132 04:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:51.132 04:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:51.132 04:06:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.132 04:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:51.132 04:06:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.132 04:06:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.132 04:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:51.132 04:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:51.132 04:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:51.132 04:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:51.132 04:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:51.132 04:06:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.132 04:06:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.132 04:06:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.132 04:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:51.132 04:06:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:51.132 04:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:51.132 04:06:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.132 04:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:51.132 04:06:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.132 [2024-11-21 04:06:51.091644] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:51.392 04:06:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.392 04:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 1d242b06-960a-429b-aaf8-cb5c8618f176 '!=' 1d242b06-960a-429b-aaf8-cb5c8618f176 ']' 00:08:51.392 04:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:08:51.392 04:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:51.392 04:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:51.392 04:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 76248 00:08:51.392 04:06:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 76248 ']' 00:08:51.392 04:06:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 76248 00:08:51.392 04:06:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:51.392 04:06:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:51.392 04:06:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76248 00:08:51.392 killing process with pid 76248 00:08:51.392 04:06:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # 
process_name=reactor_0 00:08:51.392 04:06:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:51.392 04:06:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76248' 00:08:51.392 04:06:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 76248 00:08:51.392 [2024-11-21 04:06:51.177123] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:51.392 [2024-11-21 04:06:51.177213] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:51.392 [2024-11-21 04:06:51.177310] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:51.392 [2024-11-21 04:06:51.177319] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:08:51.392 04:06:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 76248 00:08:51.392 [2024-11-21 04:06:51.238366] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:51.652 04:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:51.653 00:08:51.653 real 0m4.192s 00:08:51.653 user 0m6.446s 00:08:51.653 sys 0m0.984s 00:08:51.653 04:06:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:51.653 ************************************ 00:08:51.653 END TEST raid_superblock_test 00:08:51.653 ************************************ 00:08:51.653 04:06:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.653 04:06:51 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:08:51.653 04:06:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:51.653 04:06:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:51.653 04:06:51 bdev_raid -- common/autotest_common.sh@10 -- # 
set +x 00:08:51.913 ************************************ 00:08:51.913 START TEST raid_read_error_test 00:08:51.913 ************************************ 00:08:51.913 04:06:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 00:08:51.913 04:06:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:51.913 04:06:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:51.913 04:06:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:51.913 04:06:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:51.913 04:06:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:51.913 04:06:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:51.913 04:06:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:51.913 04:06:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:51.913 04:06:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:51.913 04:06:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:51.913 04:06:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:51.913 04:06:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:51.913 04:06:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:51.913 04:06:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:51.913 04:06:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:51.913 04:06:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:51.913 04:06:51 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:51.913 04:06:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:51.913 04:06:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:51.913 04:06:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:51.913 04:06:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:51.913 04:06:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:51.913 04:06:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:51.913 04:06:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:51.913 04:06:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:51.913 04:06:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.sycUwgg1o2 00:08:51.913 04:06:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=76490 00:08:51.913 04:06:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:51.913 04:06:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 76490 00:08:51.913 04:06:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 76490 ']' 00:08:51.913 04:06:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:51.913 04:06:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:51.913 04:06:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:51.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:51.913 04:06:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:51.913 04:06:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.913 [2024-11-21 04:06:51.732442] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:08:51.913 [2024-11-21 04:06:51.732654] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76490 ] 00:08:52.173 [2024-11-21 04:06:51.888895] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.173 [2024-11-21 04:06:51.927542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.173 [2024-11-21 04:06:52.005540] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:52.173 [2024-11-21 04:06:52.005585] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:52.746 04:06:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:52.746 04:06:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:52.746 04:06:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:52.746 04:06:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:52.746 04:06:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.746 04:06:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.746 BaseBdev1_malloc 00:08:52.746 04:06:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.746 04:06:52 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:52.746 04:06:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.746 04:06:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.746 true 00:08:52.746 04:06:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.746 04:06:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:52.746 04:06:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.746 04:06:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.746 [2024-11-21 04:06:52.612101] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:52.746 [2024-11-21 04:06:52.612163] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:52.746 [2024-11-21 04:06:52.612187] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:08:52.746 [2024-11-21 04:06:52.612197] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:52.746 [2024-11-21 04:06:52.614820] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:52.746 [2024-11-21 04:06:52.614860] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:52.746 BaseBdev1 00:08:52.746 04:06:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.746 04:06:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:52.746 04:06:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:52.746 04:06:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # 
xtrace_disable
00:08:52.746 04:06:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:52.746 BaseBdev2_malloc
00:08:52.746 04:06:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:52.746 04:06:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:08:52.746 04:06:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:52.746 04:06:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:52.746 true
00:08:52.746 04:06:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:52.746 04:06:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:08:52.746 04:06:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:52.746 04:06:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:52.746 [2024-11-21 04:06:52.659123] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:08:52.746 [2024-11-21 04:06:52.659179] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:52.746 [2024-11-21 04:06:52.659200] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880
00:08:52.746 [2024-11-21 04:06:52.659232] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:52.746 [2024-11-21 04:06:52.661743] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:52.746 [2024-11-21 04:06:52.661787] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:08:52.746 BaseBdev2
00:08:52.746 04:06:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:52.746 04:06:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:08:52.746 04:06:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:08:52.746 04:06:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:52.746 04:06:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:52.746 BaseBdev3_malloc
00:08:52.746 04:06:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:52.746 04:06:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc
00:08:52.746 04:06:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:52.746 04:06:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:52.746 true
00:08:52.746 04:06:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:52.746 04:06:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3
00:08:52.746 04:06:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:52.746 04:06:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:52.746 [2024-11-21 04:06:52.705886] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc
00:08:52.746 [2024-11-21 04:06:52.705939] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:52.746 [2024-11-21 04:06:52.705958] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780
00:08:52.746 [2024-11-21 04:06:52.705968] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:52.746 [2024-11-21 04:06:52.708382] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:52.746 [2024-11-21 04:06:52.708419] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:08:52.746 BaseBdev3
00:08:52.746 04:06:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:52.746 04:06:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s
00:08:52.746 04:06:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:52.746 04:06:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:53.007 [2024-11-21 04:06:52.717948] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:08:53.007 [2024-11-21 04:06:52.720206] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:08:53.007 [2024-11-21 04:06:52.720299] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:08:53.007 [2024-11-21 04:06:52.720510] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80
00:08:53.007 [2024-11-21 04:06:52.720530] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:08:53.007 [2024-11-21 04:06:52.720870] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002bb0
00:08:53.007 [2024-11-21 04:06:52.721027] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80
00:08:53.007 [2024-11-21 04:06:52.721037] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80
00:08:53.007 [2024-11-21 04:06:52.721184] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:53.007 04:06:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:53.007 04:06:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3
00:08:53.007 04:06:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:08:53.007 04:06:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:53.007 04:06:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:53.007 04:06:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:53.007 04:06:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:53.007 04:06:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:53.007 04:06:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:53.007 04:06:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:53.007 04:06:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:53.007 04:06:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:53.007 04:06:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:08:53.007 04:06:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:53.007 04:06:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:53.007 04:06:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:53.007 04:06:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:53.007 "name": "raid_bdev1",
00:08:53.007 "uuid": "0ff9251f-41e3-481f-bc02-8909bcb26744",
00:08:53.007 "strip_size_kb": 64,
00:08:53.007 "state": "online",
00:08:53.007 "raid_level": "raid0",
00:08:53.007 "superblock": true,
00:08:53.007 "num_base_bdevs": 3,
00:08:53.007 "num_base_bdevs_discovered": 3,
00:08:53.007 "num_base_bdevs_operational": 3,
00:08:53.007 "base_bdevs_list": [
00:08:53.007 {
00:08:53.007 "name": "BaseBdev1",
00:08:53.007 "uuid": "2a0056ab-4c67-52af-85fa-b1c5728048c5",
00:08:53.007 "is_configured": true,
00:08:53.007 "data_offset": 2048,
00:08:53.007 "data_size": 63488
00:08:53.007 },
00:08:53.007 {
00:08:53.007 "name": "BaseBdev2",
00:08:53.007 "uuid": "7719e9bc-a171-53cc-9b05-0f2988b5cb1b",
00:08:53.007 "is_configured": true,
00:08:53.007 "data_offset": 2048,
00:08:53.007 "data_size": 63488
00:08:53.007 },
00:08:53.007 {
00:08:53.007 "name": "BaseBdev3",
00:08:53.007 "uuid": "a7b78465-f042-5792-8b04-06ecb1444a92",
00:08:53.007 "is_configured": true,
00:08:53.007 "data_offset": 2048,
00:08:53.007 "data_size": 63488
00:08:53.007 }
00:08:53.007 ]
00:08:53.007 }'
00:08:53.007 04:06:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:53.007 04:06:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:53.266 04:06:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:08:53.266 04:06:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:08:53.525 [2024-11-21 04:06:53.273547] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002d50
00:08:54.465 04:06:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure
00:08:54.465 04:06:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:54.465 04:06:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:54.465 04:06:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:54.465 04:06:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:08:54.465 04:06:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]]
00:08:54.465 04:06:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3
00:08:54.465 04:06:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3
00:08:54.465 04:06:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:08:54.465 04:06:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:54.465 04:06:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:54.465 04:06:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:54.465 04:06:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:54.465 04:06:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:54.465 04:06:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:54.465 04:06:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:54.465 04:06:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:54.465 04:06:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:54.465 04:06:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:08:54.465 04:06:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:54.465 04:06:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:54.465 04:06:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:54.465 04:06:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:54.465 "name": "raid_bdev1",
00:08:54.465 "uuid": "0ff9251f-41e3-481f-bc02-8909bcb26744",
00:08:54.465 "strip_size_kb": 64,
00:08:54.465 "state": "online",
00:08:54.465 "raid_level": "raid0",
00:08:54.465 "superblock": true,
00:08:54.465 "num_base_bdevs": 3,
00:08:54.465 "num_base_bdevs_discovered": 3,
00:08:54.465 "num_base_bdevs_operational": 3,
00:08:54.465 "base_bdevs_list": [
00:08:54.465 {
00:08:54.465 "name": "BaseBdev1",
00:08:54.465 "uuid": "2a0056ab-4c67-52af-85fa-b1c5728048c5",
00:08:54.465 "is_configured": true,
00:08:54.465 "data_offset": 2048,
00:08:54.465 "data_size": 63488
00:08:54.465 },
00:08:54.465 {
00:08:54.465 "name": "BaseBdev2",
00:08:54.465 "uuid": "7719e9bc-a171-53cc-9b05-0f2988b5cb1b",
00:08:54.465 "is_configured": true,
00:08:54.465 "data_offset": 2048,
00:08:54.465 "data_size": 63488
00:08:54.465 },
00:08:54.465 {
00:08:54.465 "name": "BaseBdev3",
00:08:54.465 "uuid": "a7b78465-f042-5792-8b04-06ecb1444a92",
00:08:54.465 "is_configured": true,
00:08:54.465 "data_offset": 2048,
00:08:54.465 "data_size": 63488
00:08:54.465 }
00:08:54.465 ]
00:08:54.465 }'
00:08:54.465 04:06:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:54.465 04:06:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:54.726 04:06:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:08:54.726 04:06:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:54.726 04:06:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:54.726 [2024-11-21 04:06:54.610112] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:08:54.726 [2024-11-21 04:06:54.610254] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:08:54.726 [2024-11-21 04:06:54.612916] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:54.726 [2024-11-21 04:06:54.613058] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:54.726 [2024-11-21 04:06:54.613154] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:08:54.726 [2024-11-21 04:06:54.613254] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline
00:08:54.726 {
00:08:54.726 "results": [
00:08:54.726 {
00:08:54.726 "job": "raid_bdev1",
00:08:54.726 "core_mask": "0x1",
00:08:54.726 "workload": "randrw",
00:08:54.726 "percentage": 50,
00:08:54.726 "status": "finished",
00:08:54.726 "queue_depth": 1,
00:08:54.726 "io_size": 131072,
00:08:54.726 "runtime": 1.337106,
00:08:54.726 "iops": 14324.219620583559,
00:08:54.726 "mibps": 1790.5274525729449,
00:08:54.726 "io_failed": 1,
00:08:54.726 "io_timeout": 0,
00:08:54.726 "avg_latency_us": 98.02728790274006,
00:08:54.726 "min_latency_us": 26.270742358078603,
00:08:54.726 "max_latency_us": 1352.216593886463
00:08:54.726 }
00:08:54.726 ],
00:08:54.726 "core_count": 1
00:08:54.726 }
00:08:54.726 04:06:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:54.726 04:06:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 76490
00:08:54.726 04:06:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 76490 ']'
00:08:54.726 04:06:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 76490
00:08:54.726 04:06:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname
00:08:54.726 04:06:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:54.726 04:06:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76490
00:08:54.726 04:06:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:54.726 04:06:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:54.726 04:06:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76490'
killing process with pid 76490
04:06:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 76490
[2024-11-21 04:06:54.651744] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
04:06:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 76490
00:08:54.987 [2024-11-21 04:06:54.701931] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:08:55.247 04:06:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.sycUwgg1o2
00:08:55.247 04:06:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:08:55.247 04:06:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:08:55.247 04:06:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75
00:08:55.247 04:06:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0
00:08:55.248 04:06:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:08:55.248 04:06:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1
00:08:55.248 04:06:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]]
00:08:55.248
00:08:55.248 real 0m3.414s
00:08:55.248 user 0m4.152s
00:08:55.248 sys 0m0.657s
00:08:55.248 04:06:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:55.248 04:06:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:55.248 ************************************
00:08:55.248 END TEST raid_read_error_test
00:08:55.248 ************************************
00:08:55.248 04:06:55 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write
00:08:55.248 04:06:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:08:55.248 04:06:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:55.248 04:06:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:08:55.248 ************************************
00:08:55.248 START TEST raid_write_error_test
00:08:55.248 ************************************
00:08:55.248 04:06:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write
00:08:55.248 04:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0
00:08:55.248 04:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3
00:08:55.248 04:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write
00:08:55.248 04:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 ))
00:08:55.248 04:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:08:55.248 04:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1
00:08:55.248 04:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:08:55.248 04:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:08:55.248 04:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2
00:08:55.248 04:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:08:55.248 04:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:08:55.248 04:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3
00:08:55.248 04:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:08:55.248 04:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:08:55.248 04:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:08:55.248 04:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs
00:08:55.248 04:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1
00:08:55.248 04:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size
00:08:55.248 04:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg
00:08:55.248 04:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log
00:08:55.248 04:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s
00:08:55.248 04:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']'
00:08:55.248 04:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64
00:08:55.248 04:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64'
00:08:55.248 04:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest
00:08:55.248 04:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.LFSrXo3Bxi
00:08:55.248 04:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=76625
00:08:55.248 04:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 76625
00:08:55.248 04:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:08:55.248 04:06:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 76625 ']'
00:08:55.248 04:06:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:55.248 04:06:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:55.248 04:06:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:55.248 04:06:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:55.248 04:06:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:55.510 [2024-11-21 04:06:55.222802] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization...
00:08:55.510 [2024-11-21 04:06:55.223075] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76625 ]
00:08:55.510 [2024-11-21 04:06:55.378100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:55.510 [2024-11-21 04:06:55.419514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:55.770 [2024-11-21 04:06:55.497369] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:55.770 [2024-11-21 04:06:55.497519] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:56.344 04:06:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:56.344 04:06:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0
00:08:56.344 04:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:08:56.344 04:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:08:56.344 04:06:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:56.344 04:06:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:56.344 BaseBdev1_malloc
00:08:56.344 04:06:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:56.344 04:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:08:56.344 04:06:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:56.344 04:06:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:56.344 true
00:08:56.344 04:06:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:56.344 04:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:08:56.344 04:06:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:56.344 04:06:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:56.344 [2024-11-21 04:06:56.092440] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:08:56.344 [2024-11-21 04:06:56.092591] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:56.344 [2024-11-21 04:06:56.092621] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980
00:08:56.344 [2024-11-21 04:06:56.092631] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:56.344 [2024-11-21 04:06:56.095153] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:56.344 [2024-11-21 04:06:56.095187] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:08:56.344 BaseBdev1
00:08:56.344 04:06:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:56.344 04:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:08:56.344 04:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:08:56.344 04:06:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:56.344 04:06:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:56.344 BaseBdev2_malloc
00:08:56.344 04:06:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:56.344 04:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:08:56.344 04:06:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:56.344 04:06:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:56.344 true
00:08:56.344 04:06:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:56.344 04:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:08:56.344 04:06:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:56.345 04:06:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:56.345 [2024-11-21 04:06:56.139232] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:08:56.345 [2024-11-21 04:06:56.139284] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:56.345 [2024-11-21 04:06:56.139303] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880
00:08:56.345 [2024-11-21 04:06:56.139323] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:56.345 [2024-11-21 04:06:56.141803] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:56.345 [2024-11-21 04:06:56.141846] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:08:56.345 BaseBdev2
00:08:56.345 04:06:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:56.345 04:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:08:56.345 04:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:08:56.345 04:06:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:56.345 04:06:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:56.345 BaseBdev3_malloc
00:08:56.345 04:06:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:56.345 04:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc
00:08:56.345 04:06:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:56.345 04:06:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:56.345 true
00:08:56.345 04:06:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:56.345 04:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3
00:08:56.345 04:06:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:56.345 04:06:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:56.345 [2024-11-21 04:06:56.186164] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc
00:08:56.345 [2024-11-21 04:06:56.186238] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:56.345 [2024-11-21 04:06:56.186260] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780
00:08:56.345 [2024-11-21 04:06:56.186268] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:56.345 [2024-11-21 04:06:56.188767] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:56.345 [2024-11-21 04:06:56.188806] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:08:56.345 BaseBdev3
00:08:56.345 04:06:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:56.345 04:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s
00:08:56.345 04:06:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:56.345 04:06:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:56.345 [2024-11-21 04:06:56.198275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:08:56.345 [2024-11-21 04:06:56.200483] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:08:56.345 [2024-11-21 04:06:56.200612] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:08:56.345 [2024-11-21 04:06:56.200899] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80
00:08:56.345 [2024-11-21 04:06:56.200956] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:08:56.345 [2024-11-21 04:06:56.201305] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002bb0
00:08:56.345 [2024-11-21 04:06:56.201506] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80
00:08:56.345 [2024-11-21 04:06:56.201556] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80
00:08:56.345 [2024-11-21 04:06:56.201763] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:56.345 04:06:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:56.345 04:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3
00:08:56.345 04:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:08:56.345 04:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:56.345 04:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:56.345 04:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:56.345 04:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:56.345 04:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:56.345 04:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:56.345 04:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:56.345 04:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:56.345 04:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:56.345 04:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:08:56.345 04:06:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:56.345 04:06:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:56.345 04:06:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:56.345 04:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:56.345 "name": "raid_bdev1",
00:08:56.345 "uuid": "1b6b86ea-4af2-4469-9f73-7bfa7110e159",
00:08:56.345 "strip_size_kb": 64,
00:08:56.345 "state": "online",
00:08:56.345 "raid_level": "raid0",
00:08:56.345 "superblock": true,
00:08:56.345 "num_base_bdevs": 3,
00:08:56.345 "num_base_bdevs_discovered": 3,
00:08:56.345 "num_base_bdevs_operational": 3,
00:08:56.345 "base_bdevs_list": [
00:08:56.345 {
00:08:56.345 "name": "BaseBdev1",
00:08:56.345 "uuid": "a41ca28f-297b-5112-b2c3-4f9845f0efb4",
00:08:56.345 "is_configured": true,
00:08:56.345 "data_offset": 2048,
00:08:56.345 "data_size": 63488
00:08:56.345 },
00:08:56.345 {
00:08:56.345 "name": "BaseBdev2",
00:08:56.345 "uuid": "7770b5d0-e3d9-5969-8d1a-c79edd2a1cb4",
00:08:56.345 "is_configured": true,
00:08:56.345 "data_offset": 2048,
00:08:56.345 "data_size": 63488
00:08:56.345 },
00:08:56.345 {
00:08:56.345 "name": "BaseBdev3",
00:08:56.345 "uuid": "bbc85eec-2332-549b-b198-e93fe69b6af2",
00:08:56.345 "is_configured": true,
00:08:56.345 "data_offset": 2048,
00:08:56.345 "data_size": 63488
00:08:56.345 }
00:08:56.345 ]
00:08:56.345 }'
00:08:56.345 04:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:56.345 04:06:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:56.915 04:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:08:56.915 04:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:08:56.915 [2024-11-21 04:06:56.709826] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002d50
00:08:57.855 04:06:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure
00:08:57.855 04:06:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:57.855 04:06:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:57.855 04:06:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:57.855 04:06:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:08:57.855 04:06:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]]
00:08:57.855 04:06:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3
00:08:57.855 04:06:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3
00:08:57.855 04:06:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:08:57.855 04:06:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:57.855 04:06:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:57.855 04:06:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:57.855 04:06:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:57.855 04:06:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:57.855 04:06:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:57.855 04:06:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:57.855 04:06:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:57.855 04:06:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:57.855 04:06:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:08:57.855 04:06:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:57.855 04:06:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:57.855 04:06:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:57.855 04:06:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:57.855 "name": "raid_bdev1",
00:08:57.855 "uuid": "1b6b86ea-4af2-4469-9f73-7bfa7110e159",
00:08:57.855 "strip_size_kb": 64,
00:08:57.855 "state": "online",
00:08:57.855 "raid_level": "raid0",
00:08:57.855 "superblock": true,
00:08:57.855 "num_base_bdevs": 3,
00:08:57.855 "num_base_bdevs_discovered": 3,
00:08:57.855 "num_base_bdevs_operational": 3,
00:08:57.855 "base_bdevs_list": [
00:08:57.855 {
00:08:57.855 "name": "BaseBdev1",
00:08:57.856 "uuid": "a41ca28f-297b-5112-b2c3-4f9845f0efb4",
00:08:57.856 "is_configured": true,
00:08:57.856 "data_offset": 2048,
00:08:57.856 "data_size": 63488
00:08:57.856 },
00:08:57.856 {
00:08:57.856 "name": "BaseBdev2",
00:08:57.856 "uuid": "7770b5d0-e3d9-5969-8d1a-c79edd2a1cb4",
00:08:57.856 "is_configured": true,
00:08:57.856 "data_offset": 2048,
00:08:57.856 "data_size": 63488
00:08:57.856 },
00:08:57.856 {
00:08:57.856 "name": "BaseBdev3",
00:08:57.856 "uuid": "bbc85eec-2332-549b-b198-e93fe69b6af2",
00:08:57.856 "is_configured": true,
00:08:57.856 "data_offset": 2048,
00:08:57.856 "data_size": 63488
00:08:57.856 }
00:08:57.856 ]
00:08:57.856 }'
00:08:57.856 04:06:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:57.856 04:06:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:58.116 04:06:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:08:58.116 04:06:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:58.116 04:06:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:58.116 [2024-11-21 04:06:58.034038] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:08:58.116 [2024-11-21 04:06:58.034151] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:08:58.116 [2024-11-21 04:06:58.036777] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:58.116 [2024-11-21 04:06:58.036909] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:58.116 [2024-11-21 04:06:58.036999] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:08:58.116 [2024-11-21 04:06:58.037080] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline
00:08:58.116 {
00:08:58.116 "results": [
00:08:58.116 {
00:08:58.116 "job": "raid_bdev1",
00:08:58.116 "core_mask": "0x1",
00:08:58.116 "workload": "randrw",
00:08:58.116 "percentage": 50,
00:08:58.116 "status": "finished",
00:08:58.116 "queue_depth": 1,
00:08:58.116 "io_size": 131072,
00:08:58.116 "runtime": 1.32474,
00:08:58.116 "iops": 14513.03652037381,
00:08:58.116 "mibps": 1814.1295650467262,
00:08:58.116 "io_failed": 1,
00:08:58.116 "io_timeout": 0,
00:08:58.116 "avg_latency_us": 96.63838756588432,
00:08:58.116 "min_latency_us": 25.041048034934498,
00:08:58.116 "max_latency_us": 1359.3711790393013
00:08:58.116 }
00:08:58.116 ],
00:08:58.116 "core_count": 1
00:08:58.116 }
00:08:58.116 04:06:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:58.116 04:06:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 76625
00:08:58.116 04:06:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 76625 ']'
00:08:58.116 04:06:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 76625
00:08:58.116 04:06:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname
00:08:58.116 04:06:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:58.116 04:06:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76625
00:08:58.116 04:06:58 bdev_raid.raid_write_error_test --
common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:58.116 04:06:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:58.116 04:06:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76625' 00:08:58.116 killing process with pid 76625 00:08:58.116 04:06:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 76625 00:08:58.116 [2024-11-21 04:06:58.084900] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:58.116 04:06:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 76625 00:08:58.376 [2024-11-21 04:06:58.133763] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:58.636 04:06:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.LFSrXo3Bxi 00:08:58.636 04:06:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:58.636 04:06:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:58.636 04:06:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:08:58.636 04:06:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:58.636 04:06:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:58.636 04:06:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:58.636 04:06:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:08:58.636 00:08:58.636 real 0m3.360s 00:08:58.636 user 0m4.095s 00:08:58.636 sys 0m0.640s 00:08:58.636 ************************************ 00:08:58.636 END TEST raid_write_error_test 00:08:58.636 ************************************ 00:08:58.636 04:06:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:58.636 04:06:58 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:58.636 04:06:58 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:58.636 04:06:58 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:08:58.636 04:06:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:58.636 04:06:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:58.636 04:06:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:58.636 ************************************ 00:08:58.636 START TEST raid_state_function_test 00:08:58.636 ************************************ 00:08:58.636 04:06:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false 00:08:58.636 04:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:58.636 04:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:58.636 04:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:58.636 04:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:58.636 04:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:58.636 04:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:58.636 04:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:58.636 04:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:58.637 04:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:58.637 04:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:58.637 04:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:58.637 04:06:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:58.637 04:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:58.637 04:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:58.637 04:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:58.637 04:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:58.637 04:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:58.637 04:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:58.637 04:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:58.637 04:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:58.637 04:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:58.637 04:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:58.637 04:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:58.637 04:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:58.637 04:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:58.637 04:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:58.637 04:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=76752 00:08:58.637 04:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:58.637 04:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # 
echo 'Process raid pid: 76752' 00:08:58.637 Process raid pid: 76752 00:08:58.637 04:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 76752 00:08:58.637 04:06:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 76752 ']' 00:08:58.637 04:06:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:58.637 04:06:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:58.637 04:06:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:58.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:58.637 04:06:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:58.637 04:06:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.897 [2024-11-21 04:06:58.640374] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:08:58.897 [2024-11-21 04:06:58.640609] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:58.897 [2024-11-21 04:06:58.797442] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.897 [2024-11-21 04:06:58.838003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.157 [2024-11-21 04:06:58.915768] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:59.157 [2024-11-21 04:06:58.915941] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:59.725 04:06:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:59.725 04:06:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:59.725 04:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:59.725 04:06:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.725 04:06:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.725 [2024-11-21 04:06:59.483544] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:59.725 [2024-11-21 04:06:59.483617] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:59.725 [2024-11-21 04:06:59.483636] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:59.725 [2024-11-21 04:06:59.483648] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:59.725 [2024-11-21 04:06:59.483655] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 
00:08:59.725 [2024-11-21 04:06:59.483668] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:59.725 04:06:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.725 04:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:59.725 04:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:59.725 04:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:59.725 04:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:59.725 04:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:59.725 04:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:59.725 04:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.725 04:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.725 04:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.725 04:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.725 04:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.725 04:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.725 04:06:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.725 04:06:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.725 04:06:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.725 04:06:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.725 "name": "Existed_Raid", 00:08:59.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.725 "strip_size_kb": 64, 00:08:59.725 "state": "configuring", 00:08:59.725 "raid_level": "concat", 00:08:59.725 "superblock": false, 00:08:59.725 "num_base_bdevs": 3, 00:08:59.725 "num_base_bdevs_discovered": 0, 00:08:59.725 "num_base_bdevs_operational": 3, 00:08:59.725 "base_bdevs_list": [ 00:08:59.725 { 00:08:59.725 "name": "BaseBdev1", 00:08:59.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.725 "is_configured": false, 00:08:59.725 "data_offset": 0, 00:08:59.725 "data_size": 0 00:08:59.725 }, 00:08:59.725 { 00:08:59.725 "name": "BaseBdev2", 00:08:59.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.725 "is_configured": false, 00:08:59.725 "data_offset": 0, 00:08:59.725 "data_size": 0 00:08:59.725 }, 00:08:59.725 { 00:08:59.725 "name": "BaseBdev3", 00:08:59.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.725 "is_configured": false, 00:08:59.725 "data_offset": 0, 00:08:59.725 "data_size": 0 00:08:59.725 } 00:08:59.725 ] 00:08:59.725 }' 00:08:59.725 04:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.725 04:06:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.985 04:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:59.985 04:06:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.985 04:06:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.246 [2024-11-21 04:06:59.958579] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:00.246 [2024-11-21 04:06:59.958696] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 
00:09:00.246 04:06:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.246 04:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:00.246 04:06:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.246 04:06:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.246 [2024-11-21 04:06:59.970559] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:00.246 [2024-11-21 04:06:59.970609] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:00.246 [2024-11-21 04:06:59.970618] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:00.246 [2024-11-21 04:06:59.970628] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:00.246 [2024-11-21 04:06:59.970634] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:00.246 [2024-11-21 04:06:59.970644] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:00.246 04:06:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.246 04:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:00.246 04:06:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.246 04:06:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.246 [2024-11-21 04:06:59.997711] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:00.246 BaseBdev1 00:09:00.246 04:06:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:09:00.246 04:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:00.246 04:06:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:00.246 04:06:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:00.246 04:06:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:00.246 04:06:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:00.246 04:06:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:00.246 04:06:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:00.246 04:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.246 04:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.246 04:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.246 04:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:00.246 04:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.246 04:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.246 [ 00:09:00.246 { 00:09:00.246 "name": "BaseBdev1", 00:09:00.246 "aliases": [ 00:09:00.246 "4f451193-4297-467b-9b6f-b73e6cd8b0ad" 00:09:00.246 ], 00:09:00.246 "product_name": "Malloc disk", 00:09:00.246 "block_size": 512, 00:09:00.246 "num_blocks": 65536, 00:09:00.246 "uuid": "4f451193-4297-467b-9b6f-b73e6cd8b0ad", 00:09:00.246 "assigned_rate_limits": { 00:09:00.246 "rw_ios_per_sec": 0, 00:09:00.246 "rw_mbytes_per_sec": 0, 00:09:00.246 "r_mbytes_per_sec": 0, 00:09:00.246 "w_mbytes_per_sec": 0 00:09:00.246 }, 
00:09:00.246 "claimed": true, 00:09:00.246 "claim_type": "exclusive_write", 00:09:00.246 "zoned": false, 00:09:00.246 "supported_io_types": { 00:09:00.246 "read": true, 00:09:00.246 "write": true, 00:09:00.246 "unmap": true, 00:09:00.246 "flush": true, 00:09:00.246 "reset": true, 00:09:00.246 "nvme_admin": false, 00:09:00.246 "nvme_io": false, 00:09:00.246 "nvme_io_md": false, 00:09:00.246 "write_zeroes": true, 00:09:00.246 "zcopy": true, 00:09:00.246 "get_zone_info": false, 00:09:00.246 "zone_management": false, 00:09:00.246 "zone_append": false, 00:09:00.246 "compare": false, 00:09:00.246 "compare_and_write": false, 00:09:00.246 "abort": true, 00:09:00.246 "seek_hole": false, 00:09:00.246 "seek_data": false, 00:09:00.246 "copy": true, 00:09:00.246 "nvme_iov_md": false 00:09:00.246 }, 00:09:00.246 "memory_domains": [ 00:09:00.246 { 00:09:00.246 "dma_device_id": "system", 00:09:00.246 "dma_device_type": 1 00:09:00.246 }, 00:09:00.246 { 00:09:00.246 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.246 "dma_device_type": 2 00:09:00.246 } 00:09:00.246 ], 00:09:00.246 "driver_specific": {} 00:09:00.246 } 00:09:00.246 ] 00:09:00.246 04:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.246 04:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:00.246 04:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:00.246 04:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:00.246 04:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:00.246 04:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:00.246 04:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:00.246 04:07:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:00.246 04:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.246 04:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.246 04:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.246 04:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.246 04:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:00.246 04:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.246 04:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.246 04:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.246 04:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.246 04:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.246 "name": "Existed_Raid", 00:09:00.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.246 "strip_size_kb": 64, 00:09:00.246 "state": "configuring", 00:09:00.246 "raid_level": "concat", 00:09:00.246 "superblock": false, 00:09:00.246 "num_base_bdevs": 3, 00:09:00.246 "num_base_bdevs_discovered": 1, 00:09:00.246 "num_base_bdevs_operational": 3, 00:09:00.246 "base_bdevs_list": [ 00:09:00.246 { 00:09:00.246 "name": "BaseBdev1", 00:09:00.246 "uuid": "4f451193-4297-467b-9b6f-b73e6cd8b0ad", 00:09:00.246 "is_configured": true, 00:09:00.246 "data_offset": 0, 00:09:00.246 "data_size": 65536 00:09:00.246 }, 00:09:00.246 { 00:09:00.246 "name": "BaseBdev2", 00:09:00.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.246 "is_configured": false, 00:09:00.246 
"data_offset": 0, 00:09:00.246 "data_size": 0 00:09:00.246 }, 00:09:00.246 { 00:09:00.246 "name": "BaseBdev3", 00:09:00.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.246 "is_configured": false, 00:09:00.246 "data_offset": 0, 00:09:00.246 "data_size": 0 00:09:00.246 } 00:09:00.246 ] 00:09:00.246 }' 00:09:00.246 04:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.246 04:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.507 04:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:00.507 04:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.507 04:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.507 [2024-11-21 04:07:00.437036] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:00.507 [2024-11-21 04:07:00.437104] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:09:00.507 04:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.507 04:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:00.507 04:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.507 04:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.507 [2024-11-21 04:07:00.445045] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:00.507 [2024-11-21 04:07:00.447327] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:00.507 [2024-11-21 04:07:00.447370] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:09:00.507 [2024-11-21 04:07:00.447380] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:00.507 [2024-11-21 04:07:00.447393] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:00.507 04:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.507 04:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:00.507 04:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:00.507 04:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:00.507 04:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:00.507 04:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:00.507 04:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:00.507 04:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:00.507 04:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:00.507 04:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.507 04:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.507 04:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.507 04:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.507 04:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:00.507 04:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:00.507 04:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.507 04:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.507 04:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.767 04:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.767 "name": "Existed_Raid", 00:09:00.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.767 "strip_size_kb": 64, 00:09:00.767 "state": "configuring", 00:09:00.767 "raid_level": "concat", 00:09:00.767 "superblock": false, 00:09:00.767 "num_base_bdevs": 3, 00:09:00.767 "num_base_bdevs_discovered": 1, 00:09:00.767 "num_base_bdevs_operational": 3, 00:09:00.767 "base_bdevs_list": [ 00:09:00.767 { 00:09:00.767 "name": "BaseBdev1", 00:09:00.767 "uuid": "4f451193-4297-467b-9b6f-b73e6cd8b0ad", 00:09:00.767 "is_configured": true, 00:09:00.767 "data_offset": 0, 00:09:00.767 "data_size": 65536 00:09:00.767 }, 00:09:00.767 { 00:09:00.767 "name": "BaseBdev2", 00:09:00.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.767 "is_configured": false, 00:09:00.767 "data_offset": 0, 00:09:00.767 "data_size": 0 00:09:00.767 }, 00:09:00.767 { 00:09:00.767 "name": "BaseBdev3", 00:09:00.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.767 "is_configured": false, 00:09:00.767 "data_offset": 0, 00:09:00.767 "data_size": 0 00:09:00.767 } 00:09:00.767 ] 00:09:00.767 }' 00:09:00.767 04:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.767 04:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.028 04:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:01.028 04:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:01.028 04:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.028 [2024-11-21 04:07:00.881211] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:01.028 BaseBdev2 00:09:01.028 04:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.028 04:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:01.028 04:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:01.028 04:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:01.028 04:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:01.028 04:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:01.028 04:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:01.028 04:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:01.028 04:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.028 04:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.028 04:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.028 04:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:01.028 04:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.028 04:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.028 [ 00:09:01.028 { 00:09:01.028 "name": "BaseBdev2", 00:09:01.028 "aliases": [ 00:09:01.028 "f72f1308-d76b-4a91-90a3-dbdb3c350f57" 00:09:01.028 ], 00:09:01.028 
"product_name": "Malloc disk", 00:09:01.028 "block_size": 512, 00:09:01.028 "num_blocks": 65536, 00:09:01.028 "uuid": "f72f1308-d76b-4a91-90a3-dbdb3c350f57", 00:09:01.028 "assigned_rate_limits": { 00:09:01.028 "rw_ios_per_sec": 0, 00:09:01.028 "rw_mbytes_per_sec": 0, 00:09:01.028 "r_mbytes_per_sec": 0, 00:09:01.028 "w_mbytes_per_sec": 0 00:09:01.028 }, 00:09:01.028 "claimed": true, 00:09:01.028 "claim_type": "exclusive_write", 00:09:01.028 "zoned": false, 00:09:01.028 "supported_io_types": { 00:09:01.028 "read": true, 00:09:01.028 "write": true, 00:09:01.028 "unmap": true, 00:09:01.028 "flush": true, 00:09:01.028 "reset": true, 00:09:01.028 "nvme_admin": false, 00:09:01.028 "nvme_io": false, 00:09:01.028 "nvme_io_md": false, 00:09:01.028 "write_zeroes": true, 00:09:01.028 "zcopy": true, 00:09:01.028 "get_zone_info": false, 00:09:01.028 "zone_management": false, 00:09:01.028 "zone_append": false, 00:09:01.028 "compare": false, 00:09:01.028 "compare_and_write": false, 00:09:01.028 "abort": true, 00:09:01.028 "seek_hole": false, 00:09:01.028 "seek_data": false, 00:09:01.028 "copy": true, 00:09:01.028 "nvme_iov_md": false 00:09:01.028 }, 00:09:01.028 "memory_domains": [ 00:09:01.028 { 00:09:01.028 "dma_device_id": "system", 00:09:01.028 "dma_device_type": 1 00:09:01.028 }, 00:09:01.028 { 00:09:01.028 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.028 "dma_device_type": 2 00:09:01.028 } 00:09:01.028 ], 00:09:01.028 "driver_specific": {} 00:09:01.028 } 00:09:01.028 ] 00:09:01.028 04:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.028 04:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:01.028 04:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:01.028 04:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:01.028 04:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:01.028 04:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:01.028 04:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:01.028 04:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:01.028 04:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:01.028 04:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:01.028 04:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.028 04:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.028 04:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.028 04:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.028 04:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:01.028 04:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.028 04:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.028 04:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.028 04:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.028 04:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.028 "name": "Existed_Raid", 00:09:01.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.028 "strip_size_kb": 64, 00:09:01.028 "state": "configuring", 00:09:01.028 "raid_level": "concat", 00:09:01.028 "superblock": false, 
00:09:01.028 "num_base_bdevs": 3, 00:09:01.028 "num_base_bdevs_discovered": 2, 00:09:01.028 "num_base_bdevs_operational": 3, 00:09:01.028 "base_bdevs_list": [ 00:09:01.028 { 00:09:01.028 "name": "BaseBdev1", 00:09:01.028 "uuid": "4f451193-4297-467b-9b6f-b73e6cd8b0ad", 00:09:01.028 "is_configured": true, 00:09:01.028 "data_offset": 0, 00:09:01.028 "data_size": 65536 00:09:01.028 }, 00:09:01.028 { 00:09:01.028 "name": "BaseBdev2", 00:09:01.028 "uuid": "f72f1308-d76b-4a91-90a3-dbdb3c350f57", 00:09:01.028 "is_configured": true, 00:09:01.028 "data_offset": 0, 00:09:01.028 "data_size": 65536 00:09:01.028 }, 00:09:01.028 { 00:09:01.028 "name": "BaseBdev3", 00:09:01.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.028 "is_configured": false, 00:09:01.028 "data_offset": 0, 00:09:01.028 "data_size": 0 00:09:01.028 } 00:09:01.028 ] 00:09:01.028 }' 00:09:01.028 04:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.028 04:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.598 04:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:01.598 04:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.598 04:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.598 [2024-11-21 04:07:01.380637] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:01.598 [2024-11-21 04:07:01.380693] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:09:01.599 [2024-11-21 04:07:01.380719] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:01.599 [2024-11-21 04:07:01.381111] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:09:01.599 [2024-11-21 04:07:01.381316] bdev_raid.c:1764:raid_bdev_configure_cont: 
*DEBUG*: raid bdev generic 0x617000001900 00:09:01.599 [2024-11-21 04:07:01.381333] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:09:01.599 [2024-11-21 04:07:01.381686] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:01.599 BaseBdev3 00:09:01.599 04:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.599 04:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:01.599 04:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:01.599 04:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:01.599 04:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:01.599 04:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:01.599 04:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:01.599 04:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:01.599 04:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.599 04:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.599 04:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.599 04:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:01.599 04:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.599 04:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.599 [ 00:09:01.599 { 00:09:01.599 "name": "BaseBdev3", 00:09:01.599 "aliases": [ 
00:09:01.599 "980e8fd2-2261-4252-9038-39790bedd6b1" 00:09:01.599 ], 00:09:01.599 "product_name": "Malloc disk", 00:09:01.599 "block_size": 512, 00:09:01.599 "num_blocks": 65536, 00:09:01.599 "uuid": "980e8fd2-2261-4252-9038-39790bedd6b1", 00:09:01.599 "assigned_rate_limits": { 00:09:01.599 "rw_ios_per_sec": 0, 00:09:01.599 "rw_mbytes_per_sec": 0, 00:09:01.599 "r_mbytes_per_sec": 0, 00:09:01.599 "w_mbytes_per_sec": 0 00:09:01.599 }, 00:09:01.599 "claimed": true, 00:09:01.599 "claim_type": "exclusive_write", 00:09:01.599 "zoned": false, 00:09:01.599 "supported_io_types": { 00:09:01.599 "read": true, 00:09:01.599 "write": true, 00:09:01.599 "unmap": true, 00:09:01.599 "flush": true, 00:09:01.599 "reset": true, 00:09:01.599 "nvme_admin": false, 00:09:01.599 "nvme_io": false, 00:09:01.599 "nvme_io_md": false, 00:09:01.599 "write_zeroes": true, 00:09:01.599 "zcopy": true, 00:09:01.599 "get_zone_info": false, 00:09:01.599 "zone_management": false, 00:09:01.599 "zone_append": false, 00:09:01.599 "compare": false, 00:09:01.599 "compare_and_write": false, 00:09:01.599 "abort": true, 00:09:01.599 "seek_hole": false, 00:09:01.599 "seek_data": false, 00:09:01.599 "copy": true, 00:09:01.599 "nvme_iov_md": false 00:09:01.599 }, 00:09:01.599 "memory_domains": [ 00:09:01.599 { 00:09:01.599 "dma_device_id": "system", 00:09:01.599 "dma_device_type": 1 00:09:01.599 }, 00:09:01.599 { 00:09:01.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.599 "dma_device_type": 2 00:09:01.599 } 00:09:01.599 ], 00:09:01.599 "driver_specific": {} 00:09:01.599 } 00:09:01.599 ] 00:09:01.599 04:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.599 04:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:01.599 04:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:01.599 04:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:09:01.599 04:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:01.599 04:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:01.599 04:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:01.599 04:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:01.599 04:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:01.599 04:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:01.599 04:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.599 04:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.599 04:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.599 04:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.599 04:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.599 04:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.599 04:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.599 04:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:01.599 04:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.599 04:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.599 "name": "Existed_Raid", 00:09:01.599 "uuid": "dfbe0f61-1df1-4fdd-948a-b541c91d5a4d", 00:09:01.599 "strip_size_kb": 64, 00:09:01.599 "state": "online", 
00:09:01.599 "raid_level": "concat", 00:09:01.599 "superblock": false, 00:09:01.599 "num_base_bdevs": 3, 00:09:01.599 "num_base_bdevs_discovered": 3, 00:09:01.599 "num_base_bdevs_operational": 3, 00:09:01.599 "base_bdevs_list": [ 00:09:01.599 { 00:09:01.599 "name": "BaseBdev1", 00:09:01.599 "uuid": "4f451193-4297-467b-9b6f-b73e6cd8b0ad", 00:09:01.599 "is_configured": true, 00:09:01.599 "data_offset": 0, 00:09:01.599 "data_size": 65536 00:09:01.599 }, 00:09:01.599 { 00:09:01.599 "name": "BaseBdev2", 00:09:01.599 "uuid": "f72f1308-d76b-4a91-90a3-dbdb3c350f57", 00:09:01.599 "is_configured": true, 00:09:01.599 "data_offset": 0, 00:09:01.599 "data_size": 65536 00:09:01.599 }, 00:09:01.599 { 00:09:01.599 "name": "BaseBdev3", 00:09:01.599 "uuid": "980e8fd2-2261-4252-9038-39790bedd6b1", 00:09:01.599 "is_configured": true, 00:09:01.599 "data_offset": 0, 00:09:01.599 "data_size": 65536 00:09:01.599 } 00:09:01.599 ] 00:09:01.599 }' 00:09:01.599 04:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.599 04:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.169 04:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:02.169 04:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:02.169 04:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:02.169 04:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:02.169 04:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:02.169 04:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:02.169 04:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:02.169 04:07:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:02.169 04:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.169 04:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.169 [2024-11-21 04:07:01.856168] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:02.169 04:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.169 04:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:02.169 "name": "Existed_Raid", 00:09:02.169 "aliases": [ 00:09:02.169 "dfbe0f61-1df1-4fdd-948a-b541c91d5a4d" 00:09:02.169 ], 00:09:02.169 "product_name": "Raid Volume", 00:09:02.169 "block_size": 512, 00:09:02.169 "num_blocks": 196608, 00:09:02.169 "uuid": "dfbe0f61-1df1-4fdd-948a-b541c91d5a4d", 00:09:02.169 "assigned_rate_limits": { 00:09:02.169 "rw_ios_per_sec": 0, 00:09:02.169 "rw_mbytes_per_sec": 0, 00:09:02.169 "r_mbytes_per_sec": 0, 00:09:02.169 "w_mbytes_per_sec": 0 00:09:02.169 }, 00:09:02.169 "claimed": false, 00:09:02.169 "zoned": false, 00:09:02.169 "supported_io_types": { 00:09:02.169 "read": true, 00:09:02.169 "write": true, 00:09:02.169 "unmap": true, 00:09:02.169 "flush": true, 00:09:02.169 "reset": true, 00:09:02.169 "nvme_admin": false, 00:09:02.169 "nvme_io": false, 00:09:02.169 "nvme_io_md": false, 00:09:02.169 "write_zeroes": true, 00:09:02.169 "zcopy": false, 00:09:02.169 "get_zone_info": false, 00:09:02.169 "zone_management": false, 00:09:02.169 "zone_append": false, 00:09:02.169 "compare": false, 00:09:02.169 "compare_and_write": false, 00:09:02.169 "abort": false, 00:09:02.169 "seek_hole": false, 00:09:02.169 "seek_data": false, 00:09:02.169 "copy": false, 00:09:02.169 "nvme_iov_md": false 00:09:02.169 }, 00:09:02.169 "memory_domains": [ 00:09:02.169 { 00:09:02.169 "dma_device_id": "system", 00:09:02.169 "dma_device_type": 1 
00:09:02.169 }, 00:09:02.169 { 00:09:02.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.169 "dma_device_type": 2 00:09:02.169 }, 00:09:02.169 { 00:09:02.169 "dma_device_id": "system", 00:09:02.169 "dma_device_type": 1 00:09:02.169 }, 00:09:02.169 { 00:09:02.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.169 "dma_device_type": 2 00:09:02.169 }, 00:09:02.169 { 00:09:02.169 "dma_device_id": "system", 00:09:02.169 "dma_device_type": 1 00:09:02.169 }, 00:09:02.169 { 00:09:02.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.169 "dma_device_type": 2 00:09:02.169 } 00:09:02.169 ], 00:09:02.169 "driver_specific": { 00:09:02.169 "raid": { 00:09:02.169 "uuid": "dfbe0f61-1df1-4fdd-948a-b541c91d5a4d", 00:09:02.169 "strip_size_kb": 64, 00:09:02.169 "state": "online", 00:09:02.169 "raid_level": "concat", 00:09:02.169 "superblock": false, 00:09:02.169 "num_base_bdevs": 3, 00:09:02.169 "num_base_bdevs_discovered": 3, 00:09:02.169 "num_base_bdevs_operational": 3, 00:09:02.169 "base_bdevs_list": [ 00:09:02.169 { 00:09:02.169 "name": "BaseBdev1", 00:09:02.169 "uuid": "4f451193-4297-467b-9b6f-b73e6cd8b0ad", 00:09:02.169 "is_configured": true, 00:09:02.169 "data_offset": 0, 00:09:02.169 "data_size": 65536 00:09:02.169 }, 00:09:02.169 { 00:09:02.169 "name": "BaseBdev2", 00:09:02.169 "uuid": "f72f1308-d76b-4a91-90a3-dbdb3c350f57", 00:09:02.169 "is_configured": true, 00:09:02.169 "data_offset": 0, 00:09:02.169 "data_size": 65536 00:09:02.169 }, 00:09:02.169 { 00:09:02.169 "name": "BaseBdev3", 00:09:02.169 "uuid": "980e8fd2-2261-4252-9038-39790bedd6b1", 00:09:02.169 "is_configured": true, 00:09:02.169 "data_offset": 0, 00:09:02.169 "data_size": 65536 00:09:02.169 } 00:09:02.169 ] 00:09:02.169 } 00:09:02.169 } 00:09:02.169 }' 00:09:02.169 04:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:02.169 04:07:01 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:02.169 BaseBdev2 00:09:02.169 BaseBdev3' 00:09:02.169 04:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:02.169 04:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:02.169 04:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:02.169 04:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:02.169 04:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.170 04:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.170 04:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:02.170 04:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.170 04:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:02.170 04:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:02.170 04:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:02.170 04:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:02.170 04:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.170 04:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.170 04:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:02.170 04:07:02 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.170 04:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:02.170 04:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:02.170 04:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:02.170 04:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:02.170 04:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:02.170 04:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.170 04:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.170 04:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.170 04:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:02.170 04:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:02.170 04:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:02.170 04:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.170 04:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.170 [2024-11-21 04:07:02.111449] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:02.170 [2024-11-21 04:07:02.111524] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:02.170 [2024-11-21 04:07:02.111637] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:02.170 04:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:09:02.170 04:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:02.170 04:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:02.170 04:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:02.170 04:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:02.170 04:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:02.170 04:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:09:02.170 04:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:02.170 04:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:02.170 04:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:02.170 04:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:02.170 04:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:02.170 04:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.170 04:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.170 04:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.170 04:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.429 04:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.429 04:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:02.429 04:07:02 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.429 04:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.429 04:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.429 04:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:02.429 "name": "Existed_Raid", 00:09:02.430 "uuid": "dfbe0f61-1df1-4fdd-948a-b541c91d5a4d", 00:09:02.430 "strip_size_kb": 64, 00:09:02.430 "state": "offline", 00:09:02.430 "raid_level": "concat", 00:09:02.430 "superblock": false, 00:09:02.430 "num_base_bdevs": 3, 00:09:02.430 "num_base_bdevs_discovered": 2, 00:09:02.430 "num_base_bdevs_operational": 2, 00:09:02.430 "base_bdevs_list": [ 00:09:02.430 { 00:09:02.430 "name": null, 00:09:02.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:02.430 "is_configured": false, 00:09:02.430 "data_offset": 0, 00:09:02.430 "data_size": 65536 00:09:02.430 }, 00:09:02.430 { 00:09:02.430 "name": "BaseBdev2", 00:09:02.430 "uuid": "f72f1308-d76b-4a91-90a3-dbdb3c350f57", 00:09:02.430 "is_configured": true, 00:09:02.430 "data_offset": 0, 00:09:02.430 "data_size": 65536 00:09:02.430 }, 00:09:02.430 { 00:09:02.430 "name": "BaseBdev3", 00:09:02.430 "uuid": "980e8fd2-2261-4252-9038-39790bedd6b1", 00:09:02.430 "is_configured": true, 00:09:02.430 "data_offset": 0, 00:09:02.430 "data_size": 65536 00:09:02.430 } 00:09:02.430 ] 00:09:02.430 }' 00:09:02.430 04:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:02.430 04:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.689 04:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:02.689 04:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:02.689 04:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:02.689 
04:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.689 04:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.689 04:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.689 04:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.689 04:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:02.689 04:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:02.689 04:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:02.689 04:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.689 04:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.689 [2024-11-21 04:07:02.579813] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:02.689 04:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.689 04:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:02.689 04:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:02.689 04:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.689 04:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:02.689 04:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.689 04:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.689 04:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.689 04:07:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:02.689 04:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:02.689 04:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:02.689 04:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.689 04:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.689 [2024-11-21 04:07:02.660329] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:02.689 [2024-11-21 04:07:02.660444] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:09:02.971 04:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.971 04:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:02.971 04:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:02.971 04:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.971 04:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.971 04:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.971 04:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:02.971 04:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.971 04:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:02.971 04:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:02.971 04:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:02.971 
04:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:02.971 04:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:02.971 04:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:02.971 04:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.971 04:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.971 BaseBdev2 00:09:02.971 04:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.971 04:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:02.971 04:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:02.971 04:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:02.971 04:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:02.971 04:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:02.971 04:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:02.971 04:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:02.971 04:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.971 04:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.971 04:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.971 04:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:02.971 04:07:02 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.971 04:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.971 [ 00:09:02.971 { 00:09:02.971 "name": "BaseBdev2", 00:09:02.971 "aliases": [ 00:09:02.971 "fe89a9e4-92a6-4618-ad5c-7f12695a7314" 00:09:02.971 ], 00:09:02.971 "product_name": "Malloc disk", 00:09:02.971 "block_size": 512, 00:09:02.971 "num_blocks": 65536, 00:09:02.971 "uuid": "fe89a9e4-92a6-4618-ad5c-7f12695a7314", 00:09:02.971 "assigned_rate_limits": { 00:09:02.971 "rw_ios_per_sec": 0, 00:09:02.971 "rw_mbytes_per_sec": 0, 00:09:02.971 "r_mbytes_per_sec": 0, 00:09:02.971 "w_mbytes_per_sec": 0 00:09:02.971 }, 00:09:02.971 "claimed": false, 00:09:02.971 "zoned": false, 00:09:02.972 "supported_io_types": { 00:09:02.972 "read": true, 00:09:02.972 "write": true, 00:09:02.972 "unmap": true, 00:09:02.972 "flush": true, 00:09:02.972 "reset": true, 00:09:02.972 "nvme_admin": false, 00:09:02.972 "nvme_io": false, 00:09:02.972 "nvme_io_md": false, 00:09:02.972 "write_zeroes": true, 00:09:02.972 "zcopy": true, 00:09:02.972 "get_zone_info": false, 00:09:02.972 "zone_management": false, 00:09:02.972 "zone_append": false, 00:09:02.972 "compare": false, 00:09:02.972 "compare_and_write": false, 00:09:02.972 "abort": true, 00:09:02.972 "seek_hole": false, 00:09:02.972 "seek_data": false, 00:09:02.972 "copy": true, 00:09:02.972 "nvme_iov_md": false 00:09:02.972 }, 00:09:02.972 "memory_domains": [ 00:09:02.972 { 00:09:02.972 "dma_device_id": "system", 00:09:02.972 "dma_device_type": 1 00:09:02.972 }, 00:09:02.972 { 00:09:02.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.972 "dma_device_type": 2 00:09:02.972 } 00:09:02.972 ], 00:09:02.972 "driver_specific": {} 00:09:02.972 } 00:09:02.972 ] 00:09:02.972 04:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.972 04:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:02.972 
04:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:02.972 04:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:02.972 04:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:02.972 04:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.972 04:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.972 BaseBdev3 00:09:02.972 04:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.972 04:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:02.972 04:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:02.972 04:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:02.972 04:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:02.972 04:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:02.972 04:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:02.972 04:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:02.972 04:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.972 04:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.972 04:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.972 04:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:02.972 04:07:02 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.972 04:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.972 [ 00:09:02.972 { 00:09:02.972 "name": "BaseBdev3", 00:09:02.972 "aliases": [ 00:09:02.972 "97c7aba4-4639-475e-8ce2-77a946153caf" 00:09:02.972 ], 00:09:02.972 "product_name": "Malloc disk", 00:09:02.972 "block_size": 512, 00:09:02.972 "num_blocks": 65536, 00:09:02.972 "uuid": "97c7aba4-4639-475e-8ce2-77a946153caf", 00:09:02.972 "assigned_rate_limits": { 00:09:02.972 "rw_ios_per_sec": 0, 00:09:02.972 "rw_mbytes_per_sec": 0, 00:09:02.972 "r_mbytes_per_sec": 0, 00:09:02.972 "w_mbytes_per_sec": 0 00:09:02.972 }, 00:09:02.972 "claimed": false, 00:09:02.972 "zoned": false, 00:09:02.972 "supported_io_types": { 00:09:02.972 "read": true, 00:09:02.972 "write": true, 00:09:02.972 "unmap": true, 00:09:02.972 "flush": true, 00:09:02.972 "reset": true, 00:09:02.972 "nvme_admin": false, 00:09:02.972 "nvme_io": false, 00:09:02.972 "nvme_io_md": false, 00:09:02.972 "write_zeroes": true, 00:09:02.972 "zcopy": true, 00:09:02.972 "get_zone_info": false, 00:09:02.972 "zone_management": false, 00:09:02.972 "zone_append": false, 00:09:02.972 "compare": false, 00:09:02.972 "compare_and_write": false, 00:09:02.972 "abort": true, 00:09:02.972 "seek_hole": false, 00:09:02.972 "seek_data": false, 00:09:02.972 "copy": true, 00:09:02.972 "nvme_iov_md": false 00:09:02.972 }, 00:09:02.972 "memory_domains": [ 00:09:02.972 { 00:09:02.972 "dma_device_id": "system", 00:09:02.972 "dma_device_type": 1 00:09:02.972 }, 00:09:02.972 { 00:09:02.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.972 "dma_device_type": 2 00:09:02.972 } 00:09:02.972 ], 00:09:02.972 "driver_specific": {} 00:09:02.972 } 00:09:02.972 ] 00:09:02.972 04:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.972 04:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:02.972 
04:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:02.972 04:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:02.972 04:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:02.972 04:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.972 04:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.972 [2024-11-21 04:07:02.856496] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:02.972 [2024-11-21 04:07:02.856591] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:02.972 [2024-11-21 04:07:02.856657] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:02.972 [2024-11-21 04:07:02.858758] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:02.972 04:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.972 04:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:02.972 04:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:02.972 04:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:02.972 04:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:02.972 04:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:02.972 04:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:02.972 04:07:02 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.972 04:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.972 04:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.972 04:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.972 04:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.972 04:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.972 04:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.972 04:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:02.972 04:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.972 04:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:02.972 "name": "Existed_Raid", 00:09:02.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:02.972 "strip_size_kb": 64, 00:09:02.972 "state": "configuring", 00:09:02.972 "raid_level": "concat", 00:09:02.972 "superblock": false, 00:09:02.972 "num_base_bdevs": 3, 00:09:02.972 "num_base_bdevs_discovered": 2, 00:09:02.972 "num_base_bdevs_operational": 3, 00:09:02.972 "base_bdevs_list": [ 00:09:02.972 { 00:09:02.972 "name": "BaseBdev1", 00:09:02.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:02.972 "is_configured": false, 00:09:02.972 "data_offset": 0, 00:09:02.972 "data_size": 0 00:09:02.972 }, 00:09:02.972 { 00:09:02.972 "name": "BaseBdev2", 00:09:02.972 "uuid": "fe89a9e4-92a6-4618-ad5c-7f12695a7314", 00:09:02.972 "is_configured": true, 00:09:02.972 "data_offset": 0, 00:09:02.972 "data_size": 65536 00:09:02.972 }, 00:09:02.972 { 00:09:02.972 "name": "BaseBdev3", 00:09:02.972 "uuid": 
"97c7aba4-4639-475e-8ce2-77a946153caf", 00:09:02.972 "is_configured": true, 00:09:02.972 "data_offset": 0, 00:09:02.972 "data_size": 65536 00:09:02.972 } 00:09:02.972 ] 00:09:02.972 }' 00:09:02.972 04:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:02.972 04:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.551 04:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:03.551 04:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.551 04:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.551 [2024-11-21 04:07:03.291754] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:03.551 04:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.551 04:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:03.551 04:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:03.551 04:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:03.551 04:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:03.551 04:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:03.551 04:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:03.551 04:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.551 04:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.551 04:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:03.551 04:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.551 04:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:03.551 04:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.551 04:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.551 04:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.551 04:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.551 04:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.551 "name": "Existed_Raid", 00:09:03.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:03.551 "strip_size_kb": 64, 00:09:03.551 "state": "configuring", 00:09:03.551 "raid_level": "concat", 00:09:03.551 "superblock": false, 00:09:03.551 "num_base_bdevs": 3, 00:09:03.551 "num_base_bdevs_discovered": 1, 00:09:03.551 "num_base_bdevs_operational": 3, 00:09:03.551 "base_bdevs_list": [ 00:09:03.551 { 00:09:03.551 "name": "BaseBdev1", 00:09:03.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:03.551 "is_configured": false, 00:09:03.551 "data_offset": 0, 00:09:03.551 "data_size": 0 00:09:03.551 }, 00:09:03.551 { 00:09:03.551 "name": null, 00:09:03.551 "uuid": "fe89a9e4-92a6-4618-ad5c-7f12695a7314", 00:09:03.551 "is_configured": false, 00:09:03.551 "data_offset": 0, 00:09:03.551 "data_size": 65536 00:09:03.551 }, 00:09:03.551 { 00:09:03.551 "name": "BaseBdev3", 00:09:03.551 "uuid": "97c7aba4-4639-475e-8ce2-77a946153caf", 00:09:03.551 "is_configured": true, 00:09:03.551 "data_offset": 0, 00:09:03.551 "data_size": 65536 00:09:03.551 } 00:09:03.551 ] 00:09:03.551 }' 00:09:03.551 04:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:09:03.551 04:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.812 04:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.812 04:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:03.812 04:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.812 04:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.812 04:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.812 04:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:03.812 04:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:03.812 04:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.812 04:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.812 [2024-11-21 04:07:03.775876] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:03.812 BaseBdev1 00:09:03.812 04:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.812 04:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:03.812 04:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:03.812 04:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:03.812 04:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:03.812 04:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:03.812 04:07:03 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:03.812 04:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:03.812 04:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.812 04:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.072 04:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.072 04:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:04.072 04:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.072 04:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.072 [ 00:09:04.072 { 00:09:04.072 "name": "BaseBdev1", 00:09:04.072 "aliases": [ 00:09:04.072 "a2b0bd41-c648-4ea4-abb1-5df78a5d41b9" 00:09:04.072 ], 00:09:04.072 "product_name": "Malloc disk", 00:09:04.072 "block_size": 512, 00:09:04.072 "num_blocks": 65536, 00:09:04.072 "uuid": "a2b0bd41-c648-4ea4-abb1-5df78a5d41b9", 00:09:04.072 "assigned_rate_limits": { 00:09:04.072 "rw_ios_per_sec": 0, 00:09:04.072 "rw_mbytes_per_sec": 0, 00:09:04.072 "r_mbytes_per_sec": 0, 00:09:04.072 "w_mbytes_per_sec": 0 00:09:04.072 }, 00:09:04.072 "claimed": true, 00:09:04.072 "claim_type": "exclusive_write", 00:09:04.072 "zoned": false, 00:09:04.072 "supported_io_types": { 00:09:04.072 "read": true, 00:09:04.072 "write": true, 00:09:04.072 "unmap": true, 00:09:04.072 "flush": true, 00:09:04.072 "reset": true, 00:09:04.072 "nvme_admin": false, 00:09:04.072 "nvme_io": false, 00:09:04.072 "nvme_io_md": false, 00:09:04.072 "write_zeroes": true, 00:09:04.072 "zcopy": true, 00:09:04.072 "get_zone_info": false, 00:09:04.072 "zone_management": false, 00:09:04.072 "zone_append": false, 00:09:04.072 "compare": false, 00:09:04.072 "compare_and_write": false, 
00:09:04.072 "abort": true, 00:09:04.072 "seek_hole": false, 00:09:04.072 "seek_data": false, 00:09:04.072 "copy": true, 00:09:04.072 "nvme_iov_md": false 00:09:04.072 }, 00:09:04.072 "memory_domains": [ 00:09:04.072 { 00:09:04.072 "dma_device_id": "system", 00:09:04.072 "dma_device_type": 1 00:09:04.072 }, 00:09:04.072 { 00:09:04.072 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:04.072 "dma_device_type": 2 00:09:04.072 } 00:09:04.072 ], 00:09:04.072 "driver_specific": {} 00:09:04.072 } 00:09:04.072 ] 00:09:04.072 04:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.072 04:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:04.072 04:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:04.072 04:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:04.072 04:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:04.072 04:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:04.072 04:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:04.072 04:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:04.072 04:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.072 04:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.072 04:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.072 04:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.073 04:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:09:04.073 04:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:04.073 04:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.073 04:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.073 04:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.073 04:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.073 "name": "Existed_Raid", 00:09:04.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.073 "strip_size_kb": 64, 00:09:04.073 "state": "configuring", 00:09:04.073 "raid_level": "concat", 00:09:04.073 "superblock": false, 00:09:04.073 "num_base_bdevs": 3, 00:09:04.073 "num_base_bdevs_discovered": 2, 00:09:04.073 "num_base_bdevs_operational": 3, 00:09:04.073 "base_bdevs_list": [ 00:09:04.073 { 00:09:04.073 "name": "BaseBdev1", 00:09:04.073 "uuid": "a2b0bd41-c648-4ea4-abb1-5df78a5d41b9", 00:09:04.073 "is_configured": true, 00:09:04.073 "data_offset": 0, 00:09:04.073 "data_size": 65536 00:09:04.073 }, 00:09:04.073 { 00:09:04.073 "name": null, 00:09:04.073 "uuid": "fe89a9e4-92a6-4618-ad5c-7f12695a7314", 00:09:04.073 "is_configured": false, 00:09:04.073 "data_offset": 0, 00:09:04.073 "data_size": 65536 00:09:04.073 }, 00:09:04.073 { 00:09:04.073 "name": "BaseBdev3", 00:09:04.073 "uuid": "97c7aba4-4639-475e-8ce2-77a946153caf", 00:09:04.073 "is_configured": true, 00:09:04.073 "data_offset": 0, 00:09:04.073 "data_size": 65536 00:09:04.073 } 00:09:04.073 ] 00:09:04.073 }' 00:09:04.073 04:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.073 04:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.340 04:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.340 04:07:04 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.340 04:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.340 04:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:04.340 04:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.599 04:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:04.599 04:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:04.599 04:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.599 04:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.599 [2024-11-21 04:07:04.338975] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:04.599 04:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.599 04:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:04.599 04:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:04.599 04:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:04.599 04:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:04.600 04:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:04.600 04:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:04.600 04:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.600 04:07:04 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.600 04:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.600 04:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.600 04:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.600 04:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:04.600 04:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.600 04:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.600 04:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.600 04:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.600 "name": "Existed_Raid", 00:09:04.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.600 "strip_size_kb": 64, 00:09:04.600 "state": "configuring", 00:09:04.600 "raid_level": "concat", 00:09:04.600 "superblock": false, 00:09:04.600 "num_base_bdevs": 3, 00:09:04.600 "num_base_bdevs_discovered": 1, 00:09:04.600 "num_base_bdevs_operational": 3, 00:09:04.600 "base_bdevs_list": [ 00:09:04.600 { 00:09:04.600 "name": "BaseBdev1", 00:09:04.600 "uuid": "a2b0bd41-c648-4ea4-abb1-5df78a5d41b9", 00:09:04.600 "is_configured": true, 00:09:04.600 "data_offset": 0, 00:09:04.600 "data_size": 65536 00:09:04.600 }, 00:09:04.600 { 00:09:04.600 "name": null, 00:09:04.600 "uuid": "fe89a9e4-92a6-4618-ad5c-7f12695a7314", 00:09:04.600 "is_configured": false, 00:09:04.600 "data_offset": 0, 00:09:04.600 "data_size": 65536 00:09:04.600 }, 00:09:04.600 { 00:09:04.600 "name": null, 00:09:04.600 "uuid": "97c7aba4-4639-475e-8ce2-77a946153caf", 00:09:04.600 "is_configured": false, 00:09:04.600 "data_offset": 0, 00:09:04.600 "data_size": 65536 00:09:04.600 
} 00:09:04.600 ] 00:09:04.600 }' 00:09:04.600 04:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.600 04:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.859 04:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.859 04:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:04.859 04:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.859 04:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.859 04:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.119 04:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:05.119 04:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:05.119 04:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.119 04:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.119 [2024-11-21 04:07:04.842255] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:05.119 04:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.119 04:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:05.119 04:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:05.119 04:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:05.119 04:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=concat 00:09:05.119 04:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:05.119 04:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:05.119 04:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.119 04:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.119 04:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.119 04:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.119 04:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.119 04:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:05.119 04:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.119 04:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.119 04:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.119 04:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.119 "name": "Existed_Raid", 00:09:05.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.119 "strip_size_kb": 64, 00:09:05.119 "state": "configuring", 00:09:05.119 "raid_level": "concat", 00:09:05.119 "superblock": false, 00:09:05.119 "num_base_bdevs": 3, 00:09:05.119 "num_base_bdevs_discovered": 2, 00:09:05.119 "num_base_bdevs_operational": 3, 00:09:05.119 "base_bdevs_list": [ 00:09:05.119 { 00:09:05.119 "name": "BaseBdev1", 00:09:05.119 "uuid": "a2b0bd41-c648-4ea4-abb1-5df78a5d41b9", 00:09:05.119 "is_configured": true, 00:09:05.119 "data_offset": 0, 00:09:05.119 "data_size": 65536 00:09:05.119 }, 00:09:05.119 { 
00:09:05.119 "name": null, 00:09:05.119 "uuid": "fe89a9e4-92a6-4618-ad5c-7f12695a7314", 00:09:05.119 "is_configured": false, 00:09:05.119 "data_offset": 0, 00:09:05.119 "data_size": 65536 00:09:05.119 }, 00:09:05.119 { 00:09:05.119 "name": "BaseBdev3", 00:09:05.119 "uuid": "97c7aba4-4639-475e-8ce2-77a946153caf", 00:09:05.119 "is_configured": true, 00:09:05.119 "data_offset": 0, 00:09:05.119 "data_size": 65536 00:09:05.119 } 00:09:05.119 ] 00:09:05.119 }' 00:09:05.119 04:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.119 04:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.378 04:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.378 04:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:05.378 04:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.378 04:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.378 04:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.378 04:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:05.378 04:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:05.378 04:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.378 04:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.378 [2024-11-21 04:07:05.321461] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:05.378 04:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.378 04:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state 
Existed_Raid configuring concat 64 3 00:09:05.378 04:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:05.378 04:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:05.378 04:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:05.378 04:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:05.378 04:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:05.378 04:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.378 04:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.378 04:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.378 04:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.378 04:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.378 04:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.378 04:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.637 04:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:05.637 04:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.637 04:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.637 "name": "Existed_Raid", 00:09:05.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.637 "strip_size_kb": 64, 00:09:05.637 "state": "configuring", 00:09:05.637 "raid_level": "concat", 00:09:05.637 "superblock": false, 00:09:05.637 "num_base_bdevs": 3, 
00:09:05.637 "num_base_bdevs_discovered": 1, 00:09:05.637 "num_base_bdevs_operational": 3, 00:09:05.637 "base_bdevs_list": [ 00:09:05.637 { 00:09:05.637 "name": null, 00:09:05.637 "uuid": "a2b0bd41-c648-4ea4-abb1-5df78a5d41b9", 00:09:05.637 "is_configured": false, 00:09:05.637 "data_offset": 0, 00:09:05.637 "data_size": 65536 00:09:05.637 }, 00:09:05.637 { 00:09:05.637 "name": null, 00:09:05.637 "uuid": "fe89a9e4-92a6-4618-ad5c-7f12695a7314", 00:09:05.637 "is_configured": false, 00:09:05.637 "data_offset": 0, 00:09:05.637 "data_size": 65536 00:09:05.637 }, 00:09:05.637 { 00:09:05.637 "name": "BaseBdev3", 00:09:05.637 "uuid": "97c7aba4-4639-475e-8ce2-77a946153caf", 00:09:05.637 "is_configured": true, 00:09:05.637 "data_offset": 0, 00:09:05.637 "data_size": 65536 00:09:05.637 } 00:09:05.637 ] 00:09:05.637 }' 00:09:05.638 04:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.638 04:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.897 04:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:05.897 04:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.897 04:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.897 04:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.897 04:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.897 04:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:05.897 04:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:05.897 04:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.897 04:07:05 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.897 [2024-11-21 04:07:05.816705] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:05.897 04:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.897 04:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:05.897 04:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:05.897 04:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:05.897 04:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:05.897 04:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:05.897 04:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:05.897 04:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.897 04:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.897 04:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.897 04:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.897 04:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.897 04:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:05.897 04:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.897 04:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.897 04:07:05 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.156 04:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:06.156 "name": "Existed_Raid", 00:09:06.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.156 "strip_size_kb": 64, 00:09:06.156 "state": "configuring", 00:09:06.156 "raid_level": "concat", 00:09:06.156 "superblock": false, 00:09:06.156 "num_base_bdevs": 3, 00:09:06.156 "num_base_bdevs_discovered": 2, 00:09:06.156 "num_base_bdevs_operational": 3, 00:09:06.156 "base_bdevs_list": [ 00:09:06.156 { 00:09:06.156 "name": null, 00:09:06.156 "uuid": "a2b0bd41-c648-4ea4-abb1-5df78a5d41b9", 00:09:06.156 "is_configured": false, 00:09:06.156 "data_offset": 0, 00:09:06.156 "data_size": 65536 00:09:06.156 }, 00:09:06.156 { 00:09:06.156 "name": "BaseBdev2", 00:09:06.156 "uuid": "fe89a9e4-92a6-4618-ad5c-7f12695a7314", 00:09:06.156 "is_configured": true, 00:09:06.156 "data_offset": 0, 00:09:06.156 "data_size": 65536 00:09:06.156 }, 00:09:06.156 { 00:09:06.156 "name": "BaseBdev3", 00:09:06.156 "uuid": "97c7aba4-4639-475e-8ce2-77a946153caf", 00:09:06.156 "is_configured": true, 00:09:06.156 "data_offset": 0, 00:09:06.156 "data_size": 65536 00:09:06.156 } 00:09:06.156 ] 00:09:06.156 }' 00:09:06.156 04:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.156 04:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.416 04:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.416 04:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.416 04:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.416 04:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:06.416 04:07:06 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.416 04:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:06.416 04:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.416 04:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.416 04:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.416 04:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:06.416 04:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.416 04:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u a2b0bd41-c648-4ea4-abb1-5df78a5d41b9 00:09:06.416 04:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.416 04:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.416 [2024-11-21 04:07:06.372875] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:06.416 [2024-11-21 04:07:06.372928] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:09:06.416 [2024-11-21 04:07:06.372938] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:06.416 [2024-11-21 04:07:06.373222] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:09:06.416 [2024-11-21 04:07:06.373392] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:09:06.416 [2024-11-21 04:07:06.373403] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:09:06.416 [2024-11-21 04:07:06.373643] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:09:06.416 NewBaseBdev 00:09:06.416 04:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.416 04:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:06.416 04:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:06.416 04:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:06.416 04:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:06.416 04:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:06.416 04:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:06.416 04:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:06.416 04:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.416 04:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.416 04:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.416 04:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:06.416 04:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.416 04:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.677 [ 00:09:06.677 { 00:09:06.677 "name": "NewBaseBdev", 00:09:06.677 "aliases": [ 00:09:06.677 "a2b0bd41-c648-4ea4-abb1-5df78a5d41b9" 00:09:06.677 ], 00:09:06.677 "product_name": "Malloc disk", 00:09:06.677 "block_size": 512, 00:09:06.677 "num_blocks": 65536, 00:09:06.677 "uuid": "a2b0bd41-c648-4ea4-abb1-5df78a5d41b9", 00:09:06.677 "assigned_rate_limits": { 
00:09:06.677 "rw_ios_per_sec": 0, 00:09:06.677 "rw_mbytes_per_sec": 0, 00:09:06.677 "r_mbytes_per_sec": 0, 00:09:06.677 "w_mbytes_per_sec": 0 00:09:06.677 }, 00:09:06.677 "claimed": true, 00:09:06.677 "claim_type": "exclusive_write", 00:09:06.677 "zoned": false, 00:09:06.677 "supported_io_types": { 00:09:06.677 "read": true, 00:09:06.677 "write": true, 00:09:06.677 "unmap": true, 00:09:06.677 "flush": true, 00:09:06.677 "reset": true, 00:09:06.677 "nvme_admin": false, 00:09:06.677 "nvme_io": false, 00:09:06.677 "nvme_io_md": false, 00:09:06.677 "write_zeroes": true, 00:09:06.677 "zcopy": true, 00:09:06.677 "get_zone_info": false, 00:09:06.677 "zone_management": false, 00:09:06.677 "zone_append": false, 00:09:06.677 "compare": false, 00:09:06.677 "compare_and_write": false, 00:09:06.677 "abort": true, 00:09:06.677 "seek_hole": false, 00:09:06.677 "seek_data": false, 00:09:06.677 "copy": true, 00:09:06.677 "nvme_iov_md": false 00:09:06.677 }, 00:09:06.677 "memory_domains": [ 00:09:06.677 { 00:09:06.677 "dma_device_id": "system", 00:09:06.677 "dma_device_type": 1 00:09:06.677 }, 00:09:06.677 { 00:09:06.677 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.677 "dma_device_type": 2 00:09:06.677 } 00:09:06.677 ], 00:09:06.677 "driver_specific": {} 00:09:06.677 } 00:09:06.677 ] 00:09:06.677 04:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.677 04:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:06.677 04:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:06.677 04:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:06.677 04:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:06.677 04:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 
00:09:06.677 04:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:06.677 04:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:06.677 04:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:06.677 04:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.677 04:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:06.677 04:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.677 04:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.677 04:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:06.677 04:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.677 04:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.677 04:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.677 04:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:06.677 "name": "Existed_Raid", 00:09:06.677 "uuid": "ac76fb10-c8e9-4f00-a9e9-f9729307a11d", 00:09:06.677 "strip_size_kb": 64, 00:09:06.677 "state": "online", 00:09:06.677 "raid_level": "concat", 00:09:06.677 "superblock": false, 00:09:06.677 "num_base_bdevs": 3, 00:09:06.677 "num_base_bdevs_discovered": 3, 00:09:06.677 "num_base_bdevs_operational": 3, 00:09:06.677 "base_bdevs_list": [ 00:09:06.677 { 00:09:06.677 "name": "NewBaseBdev", 00:09:06.677 "uuid": "a2b0bd41-c648-4ea4-abb1-5df78a5d41b9", 00:09:06.677 "is_configured": true, 00:09:06.677 "data_offset": 0, 00:09:06.677 "data_size": 65536 00:09:06.677 }, 00:09:06.677 { 00:09:06.677 "name": 
"BaseBdev2", 00:09:06.677 "uuid": "fe89a9e4-92a6-4618-ad5c-7f12695a7314", 00:09:06.677 "is_configured": true, 00:09:06.677 "data_offset": 0, 00:09:06.677 "data_size": 65536 00:09:06.677 }, 00:09:06.677 { 00:09:06.677 "name": "BaseBdev3", 00:09:06.677 "uuid": "97c7aba4-4639-475e-8ce2-77a946153caf", 00:09:06.677 "is_configured": true, 00:09:06.677 "data_offset": 0, 00:09:06.677 "data_size": 65536 00:09:06.677 } 00:09:06.677 ] 00:09:06.677 }' 00:09:06.677 04:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.677 04:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.938 04:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:06.938 04:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:06.938 04:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:06.938 04:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:06.938 04:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:06.938 04:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:06.938 04:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:06.938 04:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:06.938 04:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.938 04:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.938 [2024-11-21 04:07:06.908350] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:07.198 04:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:07.198 04:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:07.198 "name": "Existed_Raid", 00:09:07.198 "aliases": [ 00:09:07.198 "ac76fb10-c8e9-4f00-a9e9-f9729307a11d" 00:09:07.198 ], 00:09:07.198 "product_name": "Raid Volume", 00:09:07.198 "block_size": 512, 00:09:07.198 "num_blocks": 196608, 00:09:07.198 "uuid": "ac76fb10-c8e9-4f00-a9e9-f9729307a11d", 00:09:07.198 "assigned_rate_limits": { 00:09:07.198 "rw_ios_per_sec": 0, 00:09:07.198 "rw_mbytes_per_sec": 0, 00:09:07.198 "r_mbytes_per_sec": 0, 00:09:07.198 "w_mbytes_per_sec": 0 00:09:07.198 }, 00:09:07.198 "claimed": false, 00:09:07.198 "zoned": false, 00:09:07.198 "supported_io_types": { 00:09:07.198 "read": true, 00:09:07.198 "write": true, 00:09:07.198 "unmap": true, 00:09:07.198 "flush": true, 00:09:07.198 "reset": true, 00:09:07.198 "nvme_admin": false, 00:09:07.198 "nvme_io": false, 00:09:07.198 "nvme_io_md": false, 00:09:07.198 "write_zeroes": true, 00:09:07.198 "zcopy": false, 00:09:07.198 "get_zone_info": false, 00:09:07.198 "zone_management": false, 00:09:07.198 "zone_append": false, 00:09:07.198 "compare": false, 00:09:07.198 "compare_and_write": false, 00:09:07.198 "abort": false, 00:09:07.198 "seek_hole": false, 00:09:07.198 "seek_data": false, 00:09:07.198 "copy": false, 00:09:07.198 "nvme_iov_md": false 00:09:07.199 }, 00:09:07.199 "memory_domains": [ 00:09:07.199 { 00:09:07.199 "dma_device_id": "system", 00:09:07.199 "dma_device_type": 1 00:09:07.199 }, 00:09:07.199 { 00:09:07.199 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.199 "dma_device_type": 2 00:09:07.199 }, 00:09:07.199 { 00:09:07.199 "dma_device_id": "system", 00:09:07.199 "dma_device_type": 1 00:09:07.199 }, 00:09:07.199 { 00:09:07.199 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.199 "dma_device_type": 2 00:09:07.199 }, 00:09:07.199 { 00:09:07.199 "dma_device_id": "system", 00:09:07.199 "dma_device_type": 1 00:09:07.199 }, 00:09:07.199 { 00:09:07.199 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:09:07.199 "dma_device_type": 2 00:09:07.199 } 00:09:07.199 ], 00:09:07.199 "driver_specific": { 00:09:07.199 "raid": { 00:09:07.199 "uuid": "ac76fb10-c8e9-4f00-a9e9-f9729307a11d", 00:09:07.199 "strip_size_kb": 64, 00:09:07.199 "state": "online", 00:09:07.199 "raid_level": "concat", 00:09:07.199 "superblock": false, 00:09:07.199 "num_base_bdevs": 3, 00:09:07.199 "num_base_bdevs_discovered": 3, 00:09:07.199 "num_base_bdevs_operational": 3, 00:09:07.199 "base_bdevs_list": [ 00:09:07.199 { 00:09:07.199 "name": "NewBaseBdev", 00:09:07.199 "uuid": "a2b0bd41-c648-4ea4-abb1-5df78a5d41b9", 00:09:07.199 "is_configured": true, 00:09:07.199 "data_offset": 0, 00:09:07.199 "data_size": 65536 00:09:07.199 }, 00:09:07.199 { 00:09:07.199 "name": "BaseBdev2", 00:09:07.199 "uuid": "fe89a9e4-92a6-4618-ad5c-7f12695a7314", 00:09:07.199 "is_configured": true, 00:09:07.199 "data_offset": 0, 00:09:07.199 "data_size": 65536 00:09:07.199 }, 00:09:07.199 { 00:09:07.199 "name": "BaseBdev3", 00:09:07.199 "uuid": "97c7aba4-4639-475e-8ce2-77a946153caf", 00:09:07.199 "is_configured": true, 00:09:07.199 "data_offset": 0, 00:09:07.199 "data_size": 65536 00:09:07.199 } 00:09:07.199 ] 00:09:07.199 } 00:09:07.199 } 00:09:07.199 }' 00:09:07.199 04:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:07.199 04:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:07.199 BaseBdev2 00:09:07.199 BaseBdev3' 00:09:07.199 04:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:07.199 04:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:07.199 04:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:07.199 04:07:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:07.199 04:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:07.199 04:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.199 04:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.199 04:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.199 04:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:07.199 04:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:07.199 04:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:07.199 04:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:07.199 04:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.199 04:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.199 04:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:07.199 04:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.199 04:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:07.199 04:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:07.199 04:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:07.199 04:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:07.199 
04:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:07.199 04:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.199 04:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.199 04:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.459 04:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:07.459 04:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:07.459 04:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:07.459 04:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.459 04:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.459 [2024-11-21 04:07:07.183503] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:07.459 [2024-11-21 04:07:07.183532] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:07.459 [2024-11-21 04:07:07.183617] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:07.459 [2024-11-21 04:07:07.183684] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:07.459 [2024-11-21 04:07:07.183706] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:09:07.459 04:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.459 04:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 76752 00:09:07.459 04:07:07 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # '[' -z 76752 ']' 00:09:07.459 04:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 76752 00:09:07.459 04:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:07.459 04:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:07.459 04:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76752 00:09:07.459 04:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:07.459 04:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:07.459 04:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76752' 00:09:07.459 killing process with pid 76752 00:09:07.459 04:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 76752 00:09:07.459 [2024-11-21 04:07:07.220722] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:07.459 04:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 76752 00:09:07.459 [2024-11-21 04:07:07.278850] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:07.719 04:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:07.719 00:09:07.719 real 0m9.066s 00:09:07.719 user 0m15.170s 00:09:07.719 sys 0m1.981s 00:09:07.719 ************************************ 00:09:07.719 END TEST raid_state_function_test 00:09:07.719 ************************************ 00:09:07.719 04:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:07.719 04:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.719 04:07:07 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test concat 3 true 00:09:07.719 04:07:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:07.719 04:07:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:07.719 04:07:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:07.719 ************************************ 00:09:07.719 START TEST raid_state_function_test_sb 00:09:07.719 ************************************ 00:09:07.719 04:07:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true 00:09:07.719 04:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:07.719 04:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:07.719 04:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:07.719 04:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:07.719 04:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:07.719 04:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:07.719 04:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:07.719 04:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:07.719 04:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:07.719 04:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:07.719 04:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:07.719 04:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:07.719 04:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 
00:09:07.719 04:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:07.719 04:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:07.978 04:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:07.978 04:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:07.978 04:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:07.978 04:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:07.978 04:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:07.978 04:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:07.978 04:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:07.978 04:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:07.978 04:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:07.978 04:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:07.978 Process raid pid: 77362 00:09:07.978 04:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:07.978 04:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=77362 00:09:07.978 04:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:07.978 04:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 77362' 00:09:07.978 04:07:07 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 77362 00:09:07.978 04:07:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 77362 ']' 00:09:07.978 04:07:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:07.978 04:07:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:07.978 04:07:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:07.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:07.978 04:07:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:07.979 04:07:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.979 [2024-11-21 04:07:07.776309] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:09:07.979 [2024-11-21 04:07:07.776536] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:07.979 [2024-11-21 04:07:07.912665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.238 [2024-11-21 04:07:07.952733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.238 [2024-11-21 04:07:08.030585] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:08.238 [2024-11-21 04:07:08.030638] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:08.805 04:07:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:08.805 04:07:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:08.805 04:07:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:08.805 04:07:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.805 04:07:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.805 [2024-11-21 04:07:08.618484] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:08.805 [2024-11-21 04:07:08.618540] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:08.805 [2024-11-21 04:07:08.618560] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:08.805 [2024-11-21 04:07:08.618571] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:08.805 [2024-11-21 04:07:08.618577] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:09:08.805 [2024-11-21 04:07:08.618589] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:08.805 04:07:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.805 04:07:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:08.805 04:07:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:08.806 04:07:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:08.806 04:07:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:08.806 04:07:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:08.806 04:07:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:08.806 04:07:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:08.806 04:07:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.806 04:07:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.806 04:07:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.806 04:07:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.806 04:07:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:08.806 04:07:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.806 04:07:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.806 04:07:08 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.806 04:07:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.806 "name": "Existed_Raid", 00:09:08.806 "uuid": "535af5a4-ca7c-41c7-9809-0748a99b4ea9", 00:09:08.806 "strip_size_kb": 64, 00:09:08.806 "state": "configuring", 00:09:08.806 "raid_level": "concat", 00:09:08.806 "superblock": true, 00:09:08.806 "num_base_bdevs": 3, 00:09:08.806 "num_base_bdevs_discovered": 0, 00:09:08.806 "num_base_bdevs_operational": 3, 00:09:08.806 "base_bdevs_list": [ 00:09:08.806 { 00:09:08.806 "name": "BaseBdev1", 00:09:08.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.806 "is_configured": false, 00:09:08.806 "data_offset": 0, 00:09:08.806 "data_size": 0 00:09:08.806 }, 00:09:08.806 { 00:09:08.806 "name": "BaseBdev2", 00:09:08.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.806 "is_configured": false, 00:09:08.806 "data_offset": 0, 00:09:08.806 "data_size": 0 00:09:08.806 }, 00:09:08.806 { 00:09:08.806 "name": "BaseBdev3", 00:09:08.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.806 "is_configured": false, 00:09:08.806 "data_offset": 0, 00:09:08.806 "data_size": 0 00:09:08.806 } 00:09:08.806 ] 00:09:08.806 }' 00:09:08.806 04:07:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.806 04:07:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.065 04:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:09.325 04:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.325 04:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.325 [2024-11-21 04:07:09.041592] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:09.325 [2024-11-21 04:07:09.041688] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:09:09.325 04:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.325 04:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:09.325 04:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.325 04:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.325 [2024-11-21 04:07:09.053595] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:09.325 [2024-11-21 04:07:09.053694] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:09.325 [2024-11-21 04:07:09.053750] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:09.325 [2024-11-21 04:07:09.053797] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:09.325 [2024-11-21 04:07:09.053829] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:09.325 [2024-11-21 04:07:09.053879] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:09.325 04:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.325 04:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:09.325 04:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.325 04:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.325 [2024-11-21 04:07:09.080715] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:09.325 BaseBdev1 
00:09:09.325 04:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.325 04:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:09.325 04:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:09.325 04:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:09.325 04:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:09.325 04:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:09.325 04:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:09.325 04:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:09.325 04:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.325 04:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.325 04:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.325 04:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:09.325 04:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.325 04:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.325 [ 00:09:09.325 { 00:09:09.325 "name": "BaseBdev1", 00:09:09.325 "aliases": [ 00:09:09.325 "c28a6798-f2ed-461a-9557-01983a3098b4" 00:09:09.325 ], 00:09:09.325 "product_name": "Malloc disk", 00:09:09.325 "block_size": 512, 00:09:09.325 "num_blocks": 65536, 00:09:09.325 "uuid": "c28a6798-f2ed-461a-9557-01983a3098b4", 00:09:09.325 "assigned_rate_limits": { 00:09:09.325 
"rw_ios_per_sec": 0, 00:09:09.325 "rw_mbytes_per_sec": 0, 00:09:09.325 "r_mbytes_per_sec": 0, 00:09:09.325 "w_mbytes_per_sec": 0 00:09:09.325 }, 00:09:09.325 "claimed": true, 00:09:09.325 "claim_type": "exclusive_write", 00:09:09.325 "zoned": false, 00:09:09.325 "supported_io_types": { 00:09:09.325 "read": true, 00:09:09.325 "write": true, 00:09:09.325 "unmap": true, 00:09:09.325 "flush": true, 00:09:09.325 "reset": true, 00:09:09.325 "nvme_admin": false, 00:09:09.325 "nvme_io": false, 00:09:09.325 "nvme_io_md": false, 00:09:09.325 "write_zeroes": true, 00:09:09.325 "zcopy": true, 00:09:09.325 "get_zone_info": false, 00:09:09.325 "zone_management": false, 00:09:09.325 "zone_append": false, 00:09:09.325 "compare": false, 00:09:09.325 "compare_and_write": false, 00:09:09.325 "abort": true, 00:09:09.325 "seek_hole": false, 00:09:09.325 "seek_data": false, 00:09:09.325 "copy": true, 00:09:09.325 "nvme_iov_md": false 00:09:09.325 }, 00:09:09.325 "memory_domains": [ 00:09:09.325 { 00:09:09.325 "dma_device_id": "system", 00:09:09.325 "dma_device_type": 1 00:09:09.325 }, 00:09:09.325 { 00:09:09.325 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.325 "dma_device_type": 2 00:09:09.325 } 00:09:09.325 ], 00:09:09.325 "driver_specific": {} 00:09:09.325 } 00:09:09.325 ] 00:09:09.325 04:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.325 04:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:09.325 04:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:09.325 04:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:09.325 04:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:09.325 04:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=concat 00:09:09.325 04:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:09.325 04:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:09.325 04:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.325 04:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.325 04:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.325 04:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.325 04:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.325 04:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.325 04:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.325 04:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.325 04:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.325 04:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.325 "name": "Existed_Raid", 00:09:09.325 "uuid": "780664e2-0881-4dec-8a81-9418cc20c144", 00:09:09.325 "strip_size_kb": 64, 00:09:09.325 "state": "configuring", 00:09:09.325 "raid_level": "concat", 00:09:09.325 "superblock": true, 00:09:09.325 "num_base_bdevs": 3, 00:09:09.325 "num_base_bdevs_discovered": 1, 00:09:09.325 "num_base_bdevs_operational": 3, 00:09:09.325 "base_bdevs_list": [ 00:09:09.325 { 00:09:09.325 "name": "BaseBdev1", 00:09:09.326 "uuid": "c28a6798-f2ed-461a-9557-01983a3098b4", 00:09:09.326 "is_configured": true, 00:09:09.326 "data_offset": 2048, 00:09:09.326 "data_size": 
63488 00:09:09.326 }, 00:09:09.326 { 00:09:09.326 "name": "BaseBdev2", 00:09:09.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.326 "is_configured": false, 00:09:09.326 "data_offset": 0, 00:09:09.326 "data_size": 0 00:09:09.326 }, 00:09:09.326 { 00:09:09.326 "name": "BaseBdev3", 00:09:09.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.326 "is_configured": false, 00:09:09.326 "data_offset": 0, 00:09:09.326 "data_size": 0 00:09:09.326 } 00:09:09.326 ] 00:09:09.326 }' 00:09:09.326 04:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.326 04:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.895 04:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:09.895 04:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.895 04:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.895 [2024-11-21 04:07:09.568046] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:09.895 [2024-11-21 04:07:09.568158] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:09:09.895 04:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.895 04:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:09.895 04:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.895 04:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.895 [2024-11-21 04:07:09.580067] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:09.895 [2024-11-21 
04:07:09.582296] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:09.895 [2024-11-21 04:07:09.582393] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:09.895 [2024-11-21 04:07:09.582413] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:09.895 [2024-11-21 04:07:09.582425] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:09.895 04:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.895 04:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:09.895 04:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:09.895 04:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:09.895 04:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:09.895 04:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:09.895 04:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:09.895 04:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:09.895 04:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:09.895 04:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.895 04:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.895 04:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.895 04:07:09 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.895 04:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.895 04:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.895 04:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.895 04:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.895 04:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.895 04:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.895 "name": "Existed_Raid", 00:09:09.895 "uuid": "f6c3f864-5579-490a-a6d7-83d75b8b4c7e", 00:09:09.895 "strip_size_kb": 64, 00:09:09.895 "state": "configuring", 00:09:09.895 "raid_level": "concat", 00:09:09.895 "superblock": true, 00:09:09.895 "num_base_bdevs": 3, 00:09:09.895 "num_base_bdevs_discovered": 1, 00:09:09.895 "num_base_bdevs_operational": 3, 00:09:09.895 "base_bdevs_list": [ 00:09:09.895 { 00:09:09.895 "name": "BaseBdev1", 00:09:09.895 "uuid": "c28a6798-f2ed-461a-9557-01983a3098b4", 00:09:09.895 "is_configured": true, 00:09:09.895 "data_offset": 2048, 00:09:09.895 "data_size": 63488 00:09:09.895 }, 00:09:09.895 { 00:09:09.895 "name": "BaseBdev2", 00:09:09.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.895 "is_configured": false, 00:09:09.895 "data_offset": 0, 00:09:09.895 "data_size": 0 00:09:09.895 }, 00:09:09.895 { 00:09:09.895 "name": "BaseBdev3", 00:09:09.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.895 "is_configured": false, 00:09:09.895 "data_offset": 0, 00:09:09.896 "data_size": 0 00:09:09.896 } 00:09:09.896 ] 00:09:09.896 }' 00:09:09.896 04:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.896 04:07:09 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:10.159 04:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:10.159 04:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.159 04:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.159 [2024-11-21 04:07:10.060320] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:10.159 BaseBdev2 00:09:10.159 04:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.159 04:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:10.159 04:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:10.159 04:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:10.159 04:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:10.159 04:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:10.159 04:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:10.159 04:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:10.159 04:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.159 04:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.159 04:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.159 04:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:10.159 04:07:10 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.159 04:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.159 [ 00:09:10.159 { 00:09:10.159 "name": "BaseBdev2", 00:09:10.159 "aliases": [ 00:09:10.159 "ec48c6b7-6a33-4302-a979-57872f758223" 00:09:10.159 ], 00:09:10.159 "product_name": "Malloc disk", 00:09:10.159 "block_size": 512, 00:09:10.159 "num_blocks": 65536, 00:09:10.159 "uuid": "ec48c6b7-6a33-4302-a979-57872f758223", 00:09:10.159 "assigned_rate_limits": { 00:09:10.159 "rw_ios_per_sec": 0, 00:09:10.159 "rw_mbytes_per_sec": 0, 00:09:10.159 "r_mbytes_per_sec": 0, 00:09:10.159 "w_mbytes_per_sec": 0 00:09:10.159 }, 00:09:10.159 "claimed": true, 00:09:10.159 "claim_type": "exclusive_write", 00:09:10.159 "zoned": false, 00:09:10.159 "supported_io_types": { 00:09:10.159 "read": true, 00:09:10.159 "write": true, 00:09:10.159 "unmap": true, 00:09:10.159 "flush": true, 00:09:10.159 "reset": true, 00:09:10.159 "nvme_admin": false, 00:09:10.159 "nvme_io": false, 00:09:10.159 "nvme_io_md": false, 00:09:10.159 "write_zeroes": true, 00:09:10.159 "zcopy": true, 00:09:10.159 "get_zone_info": false, 00:09:10.159 "zone_management": false, 00:09:10.159 "zone_append": false, 00:09:10.159 "compare": false, 00:09:10.159 "compare_and_write": false, 00:09:10.159 "abort": true, 00:09:10.159 "seek_hole": false, 00:09:10.159 "seek_data": false, 00:09:10.159 "copy": true, 00:09:10.159 "nvme_iov_md": false 00:09:10.159 }, 00:09:10.159 "memory_domains": [ 00:09:10.159 { 00:09:10.159 "dma_device_id": "system", 00:09:10.159 "dma_device_type": 1 00:09:10.159 }, 00:09:10.159 { 00:09:10.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.159 "dma_device_type": 2 00:09:10.159 } 00:09:10.159 ], 00:09:10.159 "driver_specific": {} 00:09:10.159 } 00:09:10.159 ] 00:09:10.159 04:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.159 04:07:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:09:10.159 04:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:10.159 04:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:10.159 04:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:10.159 04:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.159 04:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:10.159 04:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:10.159 04:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:10.159 04:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:10.159 04:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.159 04:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.159 04:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.159 04:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.159 04:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.159 04:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.159 04:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.159 04:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.159 04:07:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.419 04:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.419 "name": "Existed_Raid", 00:09:10.419 "uuid": "f6c3f864-5579-490a-a6d7-83d75b8b4c7e", 00:09:10.419 "strip_size_kb": 64, 00:09:10.419 "state": "configuring", 00:09:10.419 "raid_level": "concat", 00:09:10.419 "superblock": true, 00:09:10.419 "num_base_bdevs": 3, 00:09:10.419 "num_base_bdevs_discovered": 2, 00:09:10.419 "num_base_bdevs_operational": 3, 00:09:10.419 "base_bdevs_list": [ 00:09:10.419 { 00:09:10.419 "name": "BaseBdev1", 00:09:10.419 "uuid": "c28a6798-f2ed-461a-9557-01983a3098b4", 00:09:10.419 "is_configured": true, 00:09:10.419 "data_offset": 2048, 00:09:10.419 "data_size": 63488 00:09:10.419 }, 00:09:10.419 { 00:09:10.419 "name": "BaseBdev2", 00:09:10.419 "uuid": "ec48c6b7-6a33-4302-a979-57872f758223", 00:09:10.419 "is_configured": true, 00:09:10.419 "data_offset": 2048, 00:09:10.419 "data_size": 63488 00:09:10.419 }, 00:09:10.419 { 00:09:10.419 "name": "BaseBdev3", 00:09:10.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.419 "is_configured": false, 00:09:10.419 "data_offset": 0, 00:09:10.419 "data_size": 0 00:09:10.419 } 00:09:10.419 ] 00:09:10.419 }' 00:09:10.419 04:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.419 04:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.679 04:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:10.679 04:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.679 04:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.679 [2024-11-21 04:07:10.544894] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:10.679 [2024-11-21 04:07:10.545137] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:09:10.679 [2024-11-21 04:07:10.545160] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:10.679 [2024-11-21 04:07:10.545556] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:09:10.679 BaseBdev3 00:09:10.679 [2024-11-21 04:07:10.545807] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:09:10.679 [2024-11-21 04:07:10.545825] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:09:10.679 [2024-11-21 04:07:10.545974] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:10.679 04:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.679 04:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:10.679 04:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:10.679 04:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:10.680 04:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:10.680 04:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:10.680 04:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:10.680 04:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:10.680 04:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.680 04:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.680 04:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:09:10.680 04:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:10.680 04:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.680 04:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.680 [ 00:09:10.680 { 00:09:10.680 "name": "BaseBdev3", 00:09:10.680 "aliases": [ 00:09:10.680 "99b726a7-dfd5-4ce8-a38f-b91872c34853" 00:09:10.680 ], 00:09:10.680 "product_name": "Malloc disk", 00:09:10.680 "block_size": 512, 00:09:10.680 "num_blocks": 65536, 00:09:10.680 "uuid": "99b726a7-dfd5-4ce8-a38f-b91872c34853", 00:09:10.680 "assigned_rate_limits": { 00:09:10.680 "rw_ios_per_sec": 0, 00:09:10.680 "rw_mbytes_per_sec": 0, 00:09:10.680 "r_mbytes_per_sec": 0, 00:09:10.680 "w_mbytes_per_sec": 0 00:09:10.680 }, 00:09:10.680 "claimed": true, 00:09:10.680 "claim_type": "exclusive_write", 00:09:10.680 "zoned": false, 00:09:10.680 "supported_io_types": { 00:09:10.680 "read": true, 00:09:10.680 "write": true, 00:09:10.680 "unmap": true, 00:09:10.680 "flush": true, 00:09:10.680 "reset": true, 00:09:10.680 "nvme_admin": false, 00:09:10.680 "nvme_io": false, 00:09:10.680 "nvme_io_md": false, 00:09:10.680 "write_zeroes": true, 00:09:10.680 "zcopy": true, 00:09:10.680 "get_zone_info": false, 00:09:10.680 "zone_management": false, 00:09:10.680 "zone_append": false, 00:09:10.680 "compare": false, 00:09:10.680 "compare_and_write": false, 00:09:10.680 "abort": true, 00:09:10.680 "seek_hole": false, 00:09:10.680 "seek_data": false, 00:09:10.680 "copy": true, 00:09:10.680 "nvme_iov_md": false 00:09:10.680 }, 00:09:10.680 "memory_domains": [ 00:09:10.680 { 00:09:10.680 "dma_device_id": "system", 00:09:10.680 "dma_device_type": 1 00:09:10.680 }, 00:09:10.680 { 00:09:10.680 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.680 "dma_device_type": 2 00:09:10.680 } 00:09:10.680 ], 00:09:10.680 "driver_specific": 
{} 00:09:10.680 } 00:09:10.680 ] 00:09:10.680 04:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.680 04:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:10.680 04:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:10.680 04:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:10.680 04:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:10.680 04:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.680 04:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:10.680 04:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:10.680 04:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:10.680 04:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:10.680 04:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.680 04:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.680 04:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.680 04:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.680 04:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.680 04:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.680 04:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:09:10.680 04:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.680 04:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.680 04:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.680 "name": "Existed_Raid", 00:09:10.680 "uuid": "f6c3f864-5579-490a-a6d7-83d75b8b4c7e", 00:09:10.680 "strip_size_kb": 64, 00:09:10.680 "state": "online", 00:09:10.680 "raid_level": "concat", 00:09:10.680 "superblock": true, 00:09:10.680 "num_base_bdevs": 3, 00:09:10.680 "num_base_bdevs_discovered": 3, 00:09:10.680 "num_base_bdevs_operational": 3, 00:09:10.680 "base_bdevs_list": [ 00:09:10.680 { 00:09:10.680 "name": "BaseBdev1", 00:09:10.680 "uuid": "c28a6798-f2ed-461a-9557-01983a3098b4", 00:09:10.680 "is_configured": true, 00:09:10.680 "data_offset": 2048, 00:09:10.680 "data_size": 63488 00:09:10.680 }, 00:09:10.680 { 00:09:10.680 "name": "BaseBdev2", 00:09:10.680 "uuid": "ec48c6b7-6a33-4302-a979-57872f758223", 00:09:10.680 "is_configured": true, 00:09:10.680 "data_offset": 2048, 00:09:10.680 "data_size": 63488 00:09:10.680 }, 00:09:10.680 { 00:09:10.680 "name": "BaseBdev3", 00:09:10.680 "uuid": "99b726a7-dfd5-4ce8-a38f-b91872c34853", 00:09:10.680 "is_configured": true, 00:09:10.680 "data_offset": 2048, 00:09:10.680 "data_size": 63488 00:09:10.680 } 00:09:10.680 ] 00:09:10.680 }' 00:09:10.680 04:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.680 04:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.250 04:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:11.250 04:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:11.250 04:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- 
# local raid_bdev_info 00:09:11.250 04:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:11.250 04:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:11.250 04:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:11.250 04:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:11.250 04:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.250 04:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.250 04:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:11.250 [2024-11-21 04:07:11.052619] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:11.250 04:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.250 04:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:11.250 "name": "Existed_Raid", 00:09:11.250 "aliases": [ 00:09:11.250 "f6c3f864-5579-490a-a6d7-83d75b8b4c7e" 00:09:11.250 ], 00:09:11.250 "product_name": "Raid Volume", 00:09:11.250 "block_size": 512, 00:09:11.250 "num_blocks": 190464, 00:09:11.250 "uuid": "f6c3f864-5579-490a-a6d7-83d75b8b4c7e", 00:09:11.250 "assigned_rate_limits": { 00:09:11.250 "rw_ios_per_sec": 0, 00:09:11.250 "rw_mbytes_per_sec": 0, 00:09:11.250 "r_mbytes_per_sec": 0, 00:09:11.250 "w_mbytes_per_sec": 0 00:09:11.250 }, 00:09:11.250 "claimed": false, 00:09:11.250 "zoned": false, 00:09:11.250 "supported_io_types": { 00:09:11.250 "read": true, 00:09:11.250 "write": true, 00:09:11.250 "unmap": true, 00:09:11.250 "flush": true, 00:09:11.250 "reset": true, 00:09:11.250 "nvme_admin": false, 00:09:11.250 "nvme_io": false, 00:09:11.250 "nvme_io_md": false, 00:09:11.250 
"write_zeroes": true, 00:09:11.250 "zcopy": false, 00:09:11.250 "get_zone_info": false, 00:09:11.250 "zone_management": false, 00:09:11.250 "zone_append": false, 00:09:11.250 "compare": false, 00:09:11.250 "compare_and_write": false, 00:09:11.250 "abort": false, 00:09:11.250 "seek_hole": false, 00:09:11.250 "seek_data": false, 00:09:11.250 "copy": false, 00:09:11.250 "nvme_iov_md": false 00:09:11.250 }, 00:09:11.250 "memory_domains": [ 00:09:11.250 { 00:09:11.250 "dma_device_id": "system", 00:09:11.250 "dma_device_type": 1 00:09:11.250 }, 00:09:11.250 { 00:09:11.250 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.250 "dma_device_type": 2 00:09:11.250 }, 00:09:11.250 { 00:09:11.250 "dma_device_id": "system", 00:09:11.250 "dma_device_type": 1 00:09:11.250 }, 00:09:11.250 { 00:09:11.250 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.250 "dma_device_type": 2 00:09:11.250 }, 00:09:11.250 { 00:09:11.250 "dma_device_id": "system", 00:09:11.250 "dma_device_type": 1 00:09:11.250 }, 00:09:11.250 { 00:09:11.250 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.250 "dma_device_type": 2 00:09:11.250 } 00:09:11.250 ], 00:09:11.250 "driver_specific": { 00:09:11.250 "raid": { 00:09:11.250 "uuid": "f6c3f864-5579-490a-a6d7-83d75b8b4c7e", 00:09:11.250 "strip_size_kb": 64, 00:09:11.250 "state": "online", 00:09:11.250 "raid_level": "concat", 00:09:11.250 "superblock": true, 00:09:11.250 "num_base_bdevs": 3, 00:09:11.250 "num_base_bdevs_discovered": 3, 00:09:11.250 "num_base_bdevs_operational": 3, 00:09:11.250 "base_bdevs_list": [ 00:09:11.250 { 00:09:11.250 "name": "BaseBdev1", 00:09:11.250 "uuid": "c28a6798-f2ed-461a-9557-01983a3098b4", 00:09:11.250 "is_configured": true, 00:09:11.250 "data_offset": 2048, 00:09:11.250 "data_size": 63488 00:09:11.250 }, 00:09:11.250 { 00:09:11.250 "name": "BaseBdev2", 00:09:11.250 "uuid": "ec48c6b7-6a33-4302-a979-57872f758223", 00:09:11.250 "is_configured": true, 00:09:11.250 "data_offset": 2048, 00:09:11.250 "data_size": 63488 00:09:11.250 }, 
00:09:11.250 { 00:09:11.250 "name": "BaseBdev3", 00:09:11.250 "uuid": "99b726a7-dfd5-4ce8-a38f-b91872c34853", 00:09:11.250 "is_configured": true, 00:09:11.250 "data_offset": 2048, 00:09:11.250 "data_size": 63488 00:09:11.250 } 00:09:11.250 ] 00:09:11.250 } 00:09:11.250 } 00:09:11.250 }' 00:09:11.250 04:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:11.250 04:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:11.250 BaseBdev2 00:09:11.250 BaseBdev3' 00:09:11.250 04:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.250 04:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:11.250 04:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:11.250 04:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.251 04:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:11.251 04:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.251 04:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.251 04:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.251 04:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:11.251 04:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:11.251 04:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:11.251 
04:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:11.251 04:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.251 04:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.251 04:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.251 04:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.511 04:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:11.511 04:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:11.511 04:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:11.511 04:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:11.511 04:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.511 04:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.511 04:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.511 04:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.511 04:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:11.511 04:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:11.511 04:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:11.511 04:07:11 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.511 04:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.511 [2024-11-21 04:07:11.283952] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:11.511 [2024-11-21 04:07:11.284045] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:11.511 [2024-11-21 04:07:11.284129] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:11.511 04:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.511 04:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:11.511 04:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:11.511 04:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:11.511 04:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:11.511 04:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:11.511 04:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:09:11.511 04:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:11.511 04:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:11.511 04:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:11.511 04:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:11.511 04:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:11.511 04:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:11.511 04:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.511 04:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.511 04:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.511 04:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.511 04:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.511 04:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.511 04:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.511 04:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.511 04:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.511 "name": "Existed_Raid", 00:09:11.511 "uuid": "f6c3f864-5579-490a-a6d7-83d75b8b4c7e", 00:09:11.511 "strip_size_kb": 64, 00:09:11.511 "state": "offline", 00:09:11.511 "raid_level": "concat", 00:09:11.511 "superblock": true, 00:09:11.511 "num_base_bdevs": 3, 00:09:11.511 "num_base_bdevs_discovered": 2, 00:09:11.511 "num_base_bdevs_operational": 2, 00:09:11.511 "base_bdevs_list": [ 00:09:11.511 { 00:09:11.511 "name": null, 00:09:11.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:11.511 "is_configured": false, 00:09:11.511 "data_offset": 0, 00:09:11.511 "data_size": 63488 00:09:11.511 }, 00:09:11.511 { 00:09:11.511 "name": "BaseBdev2", 00:09:11.511 "uuid": "ec48c6b7-6a33-4302-a979-57872f758223", 00:09:11.511 "is_configured": true, 00:09:11.511 "data_offset": 2048, 00:09:11.511 "data_size": 63488 00:09:11.511 }, 00:09:11.511 { 00:09:11.511 "name": "BaseBdev3", 00:09:11.511 "uuid": "99b726a7-dfd5-4ce8-a38f-b91872c34853", 
00:09:11.511 "is_configured": true, 00:09:11.511 "data_offset": 2048, 00:09:11.511 "data_size": 63488 00:09:11.511 } 00:09:11.511 ] 00:09:11.511 }' 00:09:11.511 04:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.511 04:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.770 04:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:11.770 04:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:11.770 04:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.770 04:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:11.770 04:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.770 04:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.030 04:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.030 04:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:12.030 04:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:12.030 04:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:12.030 04:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.030 04:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.030 [2024-11-21 04:07:11.784213] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:12.030 04:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.030 04:07:11 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:12.030 04:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:12.030 04:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.030 04:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:12.030 04:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.030 04:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.030 04:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.030 04:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:12.030 04:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:12.030 04:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:12.030 04:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.030 04:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.030 [2024-11-21 04:07:11.860900] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:12.030 [2024-11-21 04:07:11.860963] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:09:12.030 04:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.030 04:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:12.030 04:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:12.030 04:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:12.030 04:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:12.030 04:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.030 04:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.030 04:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.030 04:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:12.030 04:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:12.030 04:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:12.030 04:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:12.030 04:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:12.030 04:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:12.030 04:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.030 04:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.030 BaseBdev2 00:09:12.030 04:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.030 04:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:12.030 04:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:12.030 04:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:12.030 04:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:12.030 04:07:11 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:12.030 04:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:12.030 04:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:12.030 04:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.030 04:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.030 04:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.030 04:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:12.030 04:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.030 04:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.030 [ 00:09:12.030 { 00:09:12.030 "name": "BaseBdev2", 00:09:12.030 "aliases": [ 00:09:12.030 "05c77bb8-4f4a-420c-ab72-b50bdb661588" 00:09:12.030 ], 00:09:12.030 "product_name": "Malloc disk", 00:09:12.030 "block_size": 512, 00:09:12.030 "num_blocks": 65536, 00:09:12.030 "uuid": "05c77bb8-4f4a-420c-ab72-b50bdb661588", 00:09:12.030 "assigned_rate_limits": { 00:09:12.030 "rw_ios_per_sec": 0, 00:09:12.030 "rw_mbytes_per_sec": 0, 00:09:12.030 "r_mbytes_per_sec": 0, 00:09:12.030 "w_mbytes_per_sec": 0 00:09:12.030 }, 00:09:12.030 "claimed": false, 00:09:12.030 "zoned": false, 00:09:12.031 "supported_io_types": { 00:09:12.031 "read": true, 00:09:12.031 "write": true, 00:09:12.031 "unmap": true, 00:09:12.031 "flush": true, 00:09:12.031 "reset": true, 00:09:12.031 "nvme_admin": false, 00:09:12.031 "nvme_io": false, 00:09:12.031 "nvme_io_md": false, 00:09:12.031 "write_zeroes": true, 00:09:12.031 "zcopy": true, 00:09:12.031 "get_zone_info": false, 00:09:12.031 
"zone_management": false, 00:09:12.031 "zone_append": false, 00:09:12.031 "compare": false, 00:09:12.031 "compare_and_write": false, 00:09:12.031 "abort": true, 00:09:12.031 "seek_hole": false, 00:09:12.031 "seek_data": false, 00:09:12.031 "copy": true, 00:09:12.031 "nvme_iov_md": false 00:09:12.031 }, 00:09:12.031 "memory_domains": [ 00:09:12.031 { 00:09:12.031 "dma_device_id": "system", 00:09:12.031 "dma_device_type": 1 00:09:12.031 }, 00:09:12.031 { 00:09:12.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.031 "dma_device_type": 2 00:09:12.031 } 00:09:12.031 ], 00:09:12.031 "driver_specific": {} 00:09:12.031 } 00:09:12.031 ] 00:09:12.031 04:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.031 04:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:12.031 04:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:12.031 04:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:12.031 04:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:12.031 04:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.031 04:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.291 BaseBdev3 00:09:12.291 04:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.291 04:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:12.291 04:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:12.291 04:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:12.291 04:07:12 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:09:12.291 04:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:12.291 04:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:12.291 04:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:12.291 04:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.291 04:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.291 04:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.291 04:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:12.291 04:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.291 04:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.291 [ 00:09:12.291 { 00:09:12.291 "name": "BaseBdev3", 00:09:12.291 "aliases": [ 00:09:12.291 "b16dad26-fdae-42f2-a4ca-d5a49e0bc9ec" 00:09:12.291 ], 00:09:12.291 "product_name": "Malloc disk", 00:09:12.291 "block_size": 512, 00:09:12.291 "num_blocks": 65536, 00:09:12.291 "uuid": "b16dad26-fdae-42f2-a4ca-d5a49e0bc9ec", 00:09:12.291 "assigned_rate_limits": { 00:09:12.291 "rw_ios_per_sec": 0, 00:09:12.291 "rw_mbytes_per_sec": 0, 00:09:12.291 "r_mbytes_per_sec": 0, 00:09:12.291 "w_mbytes_per_sec": 0 00:09:12.291 }, 00:09:12.291 "claimed": false, 00:09:12.291 "zoned": false, 00:09:12.291 "supported_io_types": { 00:09:12.291 "read": true, 00:09:12.291 "write": true, 00:09:12.291 "unmap": true, 00:09:12.291 "flush": true, 00:09:12.291 "reset": true, 00:09:12.291 "nvme_admin": false, 00:09:12.291 "nvme_io": false, 00:09:12.291 "nvme_io_md": false, 00:09:12.291 "write_zeroes": true, 00:09:12.291 
"zcopy": true, 00:09:12.291 "get_zone_info": false, 00:09:12.291 "zone_management": false, 00:09:12.291 "zone_append": false, 00:09:12.291 "compare": false, 00:09:12.291 "compare_and_write": false, 00:09:12.291 "abort": true, 00:09:12.291 "seek_hole": false, 00:09:12.291 "seek_data": false, 00:09:12.291 "copy": true, 00:09:12.291 "nvme_iov_md": false 00:09:12.291 }, 00:09:12.291 "memory_domains": [ 00:09:12.291 { 00:09:12.291 "dma_device_id": "system", 00:09:12.291 "dma_device_type": 1 00:09:12.291 }, 00:09:12.291 { 00:09:12.291 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.291 "dma_device_type": 2 00:09:12.291 } 00:09:12.291 ], 00:09:12.291 "driver_specific": {} 00:09:12.291 } 00:09:12.291 ] 00:09:12.291 04:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.291 04:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:12.291 04:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:12.291 04:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:12.291 04:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:12.291 04:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.291 04:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.291 [2024-11-21 04:07:12.058845] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:12.291 [2024-11-21 04:07:12.058894] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:12.291 [2024-11-21 04:07:12.058917] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:12.291 [2024-11-21 04:07:12.061110] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:12.291 04:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.291 04:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:12.291 04:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:12.291 04:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:12.291 04:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:12.291 04:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:12.291 04:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:12.291 04:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.291 04:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.291 04:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.291 04:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.291 04:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.291 04:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:12.291 04:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.291 04:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.291 04:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.291 04:07:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.291 "name": "Existed_Raid", 00:09:12.291 "uuid": "091d77f9-4519-42a9-abf4-cfa243698652", 00:09:12.291 "strip_size_kb": 64, 00:09:12.291 "state": "configuring", 00:09:12.291 "raid_level": "concat", 00:09:12.291 "superblock": true, 00:09:12.291 "num_base_bdevs": 3, 00:09:12.291 "num_base_bdevs_discovered": 2, 00:09:12.291 "num_base_bdevs_operational": 3, 00:09:12.291 "base_bdevs_list": [ 00:09:12.291 { 00:09:12.291 "name": "BaseBdev1", 00:09:12.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:12.291 "is_configured": false, 00:09:12.291 "data_offset": 0, 00:09:12.291 "data_size": 0 00:09:12.291 }, 00:09:12.291 { 00:09:12.291 "name": "BaseBdev2", 00:09:12.291 "uuid": "05c77bb8-4f4a-420c-ab72-b50bdb661588", 00:09:12.291 "is_configured": true, 00:09:12.291 "data_offset": 2048, 00:09:12.291 "data_size": 63488 00:09:12.291 }, 00:09:12.291 { 00:09:12.291 "name": "BaseBdev3", 00:09:12.291 "uuid": "b16dad26-fdae-42f2-a4ca-d5a49e0bc9ec", 00:09:12.291 "is_configured": true, 00:09:12.291 "data_offset": 2048, 00:09:12.291 "data_size": 63488 00:09:12.291 } 00:09:12.291 ] 00:09:12.291 }' 00:09:12.291 04:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.291 04:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.551 04:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:12.551 04:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.551 04:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.551 [2024-11-21 04:07:12.482170] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:12.551 04:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.551 04:07:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:12.551 04:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:12.551 04:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:12.551 04:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:12.551 04:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:12.551 04:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:12.551 04:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.551 04:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.551 04:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.551 04:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.551 04:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.551 04:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.551 04:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:12.551 04:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.551 04:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.810 04:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.810 "name": "Existed_Raid", 00:09:12.810 "uuid": "091d77f9-4519-42a9-abf4-cfa243698652", 00:09:12.810 "strip_size_kb": 64, 
00:09:12.810 "state": "configuring", 00:09:12.810 "raid_level": "concat", 00:09:12.810 "superblock": true, 00:09:12.810 "num_base_bdevs": 3, 00:09:12.810 "num_base_bdevs_discovered": 1, 00:09:12.810 "num_base_bdevs_operational": 3, 00:09:12.810 "base_bdevs_list": [ 00:09:12.811 { 00:09:12.811 "name": "BaseBdev1", 00:09:12.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:12.811 "is_configured": false, 00:09:12.811 "data_offset": 0, 00:09:12.811 "data_size": 0 00:09:12.811 }, 00:09:12.811 { 00:09:12.811 "name": null, 00:09:12.811 "uuid": "05c77bb8-4f4a-420c-ab72-b50bdb661588", 00:09:12.811 "is_configured": false, 00:09:12.811 "data_offset": 0, 00:09:12.811 "data_size": 63488 00:09:12.811 }, 00:09:12.811 { 00:09:12.811 "name": "BaseBdev3", 00:09:12.811 "uuid": "b16dad26-fdae-42f2-a4ca-d5a49e0bc9ec", 00:09:12.811 "is_configured": true, 00:09:12.811 "data_offset": 2048, 00:09:12.811 "data_size": 63488 00:09:12.811 } 00:09:12.811 ] 00:09:12.811 }' 00:09:12.811 04:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.811 04:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.070 04:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.070 04:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:13.070 04:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.070 04:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.070 04:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.070 04:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:13.070 04:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:09:13.070 04:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.070 04:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.070 [2024-11-21 04:07:12.950212] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:13.070 BaseBdev1 00:09:13.070 04:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.070 04:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:13.070 04:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:13.070 04:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:13.070 04:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:13.070 04:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:13.070 04:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:13.070 04:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:13.070 04:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.070 04:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.070 04:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.070 04:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:13.070 04:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.070 04:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.070 
[ 00:09:13.070 { 00:09:13.070 "name": "BaseBdev1", 00:09:13.070 "aliases": [ 00:09:13.070 "0117c8a3-55a3-4c03-8cf0-095081a2688c" 00:09:13.070 ], 00:09:13.070 "product_name": "Malloc disk", 00:09:13.070 "block_size": 512, 00:09:13.070 "num_blocks": 65536, 00:09:13.070 "uuid": "0117c8a3-55a3-4c03-8cf0-095081a2688c", 00:09:13.070 "assigned_rate_limits": { 00:09:13.070 "rw_ios_per_sec": 0, 00:09:13.070 "rw_mbytes_per_sec": 0, 00:09:13.070 "r_mbytes_per_sec": 0, 00:09:13.070 "w_mbytes_per_sec": 0 00:09:13.070 }, 00:09:13.070 "claimed": true, 00:09:13.070 "claim_type": "exclusive_write", 00:09:13.070 "zoned": false, 00:09:13.070 "supported_io_types": { 00:09:13.070 "read": true, 00:09:13.070 "write": true, 00:09:13.070 "unmap": true, 00:09:13.070 "flush": true, 00:09:13.070 "reset": true, 00:09:13.070 "nvme_admin": false, 00:09:13.070 "nvme_io": false, 00:09:13.070 "nvme_io_md": false, 00:09:13.070 "write_zeroes": true, 00:09:13.070 "zcopy": true, 00:09:13.070 "get_zone_info": false, 00:09:13.070 "zone_management": false, 00:09:13.070 "zone_append": false, 00:09:13.070 "compare": false, 00:09:13.070 "compare_and_write": false, 00:09:13.070 "abort": true, 00:09:13.070 "seek_hole": false, 00:09:13.070 "seek_data": false, 00:09:13.070 "copy": true, 00:09:13.070 "nvme_iov_md": false 00:09:13.070 }, 00:09:13.070 "memory_domains": [ 00:09:13.070 { 00:09:13.070 "dma_device_id": "system", 00:09:13.070 "dma_device_type": 1 00:09:13.070 }, 00:09:13.070 { 00:09:13.070 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:13.070 "dma_device_type": 2 00:09:13.070 } 00:09:13.070 ], 00:09:13.070 "driver_specific": {} 00:09:13.070 } 00:09:13.070 ] 00:09:13.070 04:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.070 04:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:13.070 04:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring concat 64 3 00:09:13.070 04:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:13.070 04:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:13.070 04:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:13.070 04:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:13.070 04:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:13.070 04:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.070 04:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.070 04:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.070 04:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.070 04:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.070 04:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:13.070 04:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.070 04:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.070 04:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.070 04:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.070 "name": "Existed_Raid", 00:09:13.070 "uuid": "091d77f9-4519-42a9-abf4-cfa243698652", 00:09:13.070 "strip_size_kb": 64, 00:09:13.070 "state": "configuring", 00:09:13.070 "raid_level": "concat", 00:09:13.070 "superblock": true, 
00:09:13.070 "num_base_bdevs": 3, 00:09:13.070 "num_base_bdevs_discovered": 2, 00:09:13.070 "num_base_bdevs_operational": 3, 00:09:13.070 "base_bdevs_list": [ 00:09:13.070 { 00:09:13.071 "name": "BaseBdev1", 00:09:13.071 "uuid": "0117c8a3-55a3-4c03-8cf0-095081a2688c", 00:09:13.071 "is_configured": true, 00:09:13.071 "data_offset": 2048, 00:09:13.071 "data_size": 63488 00:09:13.071 }, 00:09:13.071 { 00:09:13.071 "name": null, 00:09:13.071 "uuid": "05c77bb8-4f4a-420c-ab72-b50bdb661588", 00:09:13.071 "is_configured": false, 00:09:13.071 "data_offset": 0, 00:09:13.071 "data_size": 63488 00:09:13.071 }, 00:09:13.071 { 00:09:13.071 "name": "BaseBdev3", 00:09:13.071 "uuid": "b16dad26-fdae-42f2-a4ca-d5a49e0bc9ec", 00:09:13.071 "is_configured": true, 00:09:13.071 "data_offset": 2048, 00:09:13.071 "data_size": 63488 00:09:13.071 } 00:09:13.071 ] 00:09:13.071 }' 00:09:13.071 04:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.330 04:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.590 04:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.590 04:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.590 04:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.590 04:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:13.590 04:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.590 04:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:13.590 04:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:13.590 04:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:09:13.590 04:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.590 [2024-11-21 04:07:13.493406] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:13.590 04:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.590 04:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:13.590 04:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:13.590 04:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:13.590 04:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:13.590 04:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:13.590 04:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:13.590 04:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.590 04:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.590 04:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.590 04:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.590 04:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.590 04:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:13.590 04:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.590 04:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:09:13.590 04:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.590 04:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.590 "name": "Existed_Raid", 00:09:13.590 "uuid": "091d77f9-4519-42a9-abf4-cfa243698652", 00:09:13.590 "strip_size_kb": 64, 00:09:13.590 "state": "configuring", 00:09:13.590 "raid_level": "concat", 00:09:13.590 "superblock": true, 00:09:13.590 "num_base_bdevs": 3, 00:09:13.590 "num_base_bdevs_discovered": 1, 00:09:13.590 "num_base_bdevs_operational": 3, 00:09:13.590 "base_bdevs_list": [ 00:09:13.590 { 00:09:13.590 "name": "BaseBdev1", 00:09:13.590 "uuid": "0117c8a3-55a3-4c03-8cf0-095081a2688c", 00:09:13.590 "is_configured": true, 00:09:13.590 "data_offset": 2048, 00:09:13.590 "data_size": 63488 00:09:13.590 }, 00:09:13.590 { 00:09:13.590 "name": null, 00:09:13.590 "uuid": "05c77bb8-4f4a-420c-ab72-b50bdb661588", 00:09:13.590 "is_configured": false, 00:09:13.590 "data_offset": 0, 00:09:13.590 "data_size": 63488 00:09:13.590 }, 00:09:13.590 { 00:09:13.590 "name": null, 00:09:13.590 "uuid": "b16dad26-fdae-42f2-a4ca-d5a49e0bc9ec", 00:09:13.590 "is_configured": false, 00:09:13.590 "data_offset": 0, 00:09:13.590 "data_size": 63488 00:09:13.590 } 00:09:13.590 ] 00:09:13.590 }' 00:09:13.590 04:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.590 04:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.166 04:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:14.166 04:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.166 04:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.166 04:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:14.167 04:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.167 04:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:14.167 04:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:14.167 04:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.167 04:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.167 [2024-11-21 04:07:13.916668] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:14.167 04:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.167 04:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:14.167 04:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.167 04:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:14.167 04:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:14.167 04:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.167 04:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:14.167 04:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.167 04:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.167 04:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.167 04:07:13 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.167 04:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.167 04:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.167 04:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.167 04:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.167 04:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.167 04:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.167 "name": "Existed_Raid", 00:09:14.167 "uuid": "091d77f9-4519-42a9-abf4-cfa243698652", 00:09:14.167 "strip_size_kb": 64, 00:09:14.167 "state": "configuring", 00:09:14.167 "raid_level": "concat", 00:09:14.167 "superblock": true, 00:09:14.167 "num_base_bdevs": 3, 00:09:14.167 "num_base_bdevs_discovered": 2, 00:09:14.167 "num_base_bdevs_operational": 3, 00:09:14.167 "base_bdevs_list": [ 00:09:14.167 { 00:09:14.167 "name": "BaseBdev1", 00:09:14.167 "uuid": "0117c8a3-55a3-4c03-8cf0-095081a2688c", 00:09:14.167 "is_configured": true, 00:09:14.167 "data_offset": 2048, 00:09:14.167 "data_size": 63488 00:09:14.167 }, 00:09:14.167 { 00:09:14.167 "name": null, 00:09:14.167 "uuid": "05c77bb8-4f4a-420c-ab72-b50bdb661588", 00:09:14.167 "is_configured": false, 00:09:14.167 "data_offset": 0, 00:09:14.167 "data_size": 63488 00:09:14.167 }, 00:09:14.167 { 00:09:14.167 "name": "BaseBdev3", 00:09:14.167 "uuid": "b16dad26-fdae-42f2-a4ca-d5a49e0bc9ec", 00:09:14.167 "is_configured": true, 00:09:14.167 "data_offset": 2048, 00:09:14.167 "data_size": 63488 00:09:14.167 } 00:09:14.167 ] 00:09:14.167 }' 00:09:14.167 04:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.167 04:07:13 bdev_raid.raid_state_function_test_sb 
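Each `verify_raid_bdev_state` call in this log repeats the same pattern: select the named raid bdev from `bdev_raid_get_bdevs all`, then compare state, RAID level, strip size, and base-bdev counts against the expected values. A rough, self-contained Python sketch of that comparison (field names taken from the dumps above; the shell helper also cross-checks discovered counts per state, which this simplified version omits):

```python
import json

def verify_raid_bdev_state(raid_bdevs, name, expected_state, raid_level,
                           strip_size_kb, num_operational):
    # Equivalent of: jq -r '.[] | select(.name == "Existed_Raid")'
    info = next(b for b in raid_bdevs if b["name"] == name)
    # Compare the fields the shell helper stores in its locals.
    return (info["state"] == expected_state and
            info["raid_level"] == raid_level and
            info["strip_size_kb"] == strip_size_kb and
            info["num_base_bdevs_operational"] == num_operational)

# Trimmed sample of the raid_bdev_info dumped above.
sample = json.loads("""[{
  "name": "Existed_Raid", "state": "configuring", "raid_level": "concat",
  "strip_size_kb": 64, "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 2, "num_base_bdevs_operational": 3
}]""")

print(verify_raid_bdev_state(sample, "Existed_Raid", "configuring",
                             "concat", 64, 3))  # True
```

This mirrors the `verify_raid_bdev_state Existed_Raid configuring concat 64 3` invocations above: the array stays in `configuring` until all three base bdevs are claimed.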
-- common/autotest_common.sh@10 -- # set +x 00:09:14.426 04:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:14.426 04:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.426 04:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.426 04:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.426 04:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.686 04:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:14.686 04:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:14.686 04:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.686 04:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.686 [2024-11-21 04:07:14.416283] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:14.686 04:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.686 04:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:14.686 04:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.686 04:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:14.686 04:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:14.686 04:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.686 04:07:14 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:14.686 04:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.686 04:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.686 04:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.686 04:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.686 04:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.686 04:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.686 04:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.686 04:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.686 04:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.686 04:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.686 "name": "Existed_Raid", 00:09:14.686 "uuid": "091d77f9-4519-42a9-abf4-cfa243698652", 00:09:14.686 "strip_size_kb": 64, 00:09:14.686 "state": "configuring", 00:09:14.686 "raid_level": "concat", 00:09:14.686 "superblock": true, 00:09:14.686 "num_base_bdevs": 3, 00:09:14.686 "num_base_bdevs_discovered": 1, 00:09:14.686 "num_base_bdevs_operational": 3, 00:09:14.686 "base_bdevs_list": [ 00:09:14.686 { 00:09:14.686 "name": null, 00:09:14.686 "uuid": "0117c8a3-55a3-4c03-8cf0-095081a2688c", 00:09:14.686 "is_configured": false, 00:09:14.686 "data_offset": 0, 00:09:14.686 "data_size": 63488 00:09:14.686 }, 00:09:14.686 { 00:09:14.686 "name": null, 00:09:14.686 "uuid": "05c77bb8-4f4a-420c-ab72-b50bdb661588", 00:09:14.686 "is_configured": false, 00:09:14.686 "data_offset": 0, 
00:09:14.686 "data_size": 63488 00:09:14.686 }, 00:09:14.686 { 00:09:14.686 "name": "BaseBdev3", 00:09:14.686 "uuid": "b16dad26-fdae-42f2-a4ca-d5a49e0bc9ec", 00:09:14.686 "is_configured": true, 00:09:14.686 "data_offset": 2048, 00:09:14.686 "data_size": 63488 00:09:14.686 } 00:09:14.686 ] 00:09:14.686 }' 00:09:14.686 04:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.686 04:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.946 04:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:14.946 04:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.946 04:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.946 04:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.946 04:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.205 04:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:15.205 04:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:15.205 04:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.205 04:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.205 [2024-11-21 04:07:14.927836] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:15.205 04:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.205 04:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:15.205 04:07:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:15.205 04:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:15.205 04:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:15.205 04:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:15.205 04:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:15.205 04:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.205 04:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.205 04:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.205 04:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.205 04:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.205 04:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.205 04:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.205 04:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.205 04:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.205 04:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.205 "name": "Existed_Raid", 00:09:15.205 "uuid": "091d77f9-4519-42a9-abf4-cfa243698652", 00:09:15.205 "strip_size_kb": 64, 00:09:15.205 "state": "configuring", 00:09:15.205 "raid_level": "concat", 00:09:15.205 "superblock": true, 00:09:15.205 "num_base_bdevs": 3, 00:09:15.205 
"num_base_bdevs_discovered": 2, 00:09:15.205 "num_base_bdevs_operational": 3, 00:09:15.205 "base_bdevs_list": [ 00:09:15.205 { 00:09:15.205 "name": null, 00:09:15.205 "uuid": "0117c8a3-55a3-4c03-8cf0-095081a2688c", 00:09:15.205 "is_configured": false, 00:09:15.205 "data_offset": 0, 00:09:15.205 "data_size": 63488 00:09:15.205 }, 00:09:15.205 { 00:09:15.205 "name": "BaseBdev2", 00:09:15.205 "uuid": "05c77bb8-4f4a-420c-ab72-b50bdb661588", 00:09:15.205 "is_configured": true, 00:09:15.205 "data_offset": 2048, 00:09:15.205 "data_size": 63488 00:09:15.205 }, 00:09:15.205 { 00:09:15.205 "name": "BaseBdev3", 00:09:15.205 "uuid": "b16dad26-fdae-42f2-a4ca-d5a49e0bc9ec", 00:09:15.205 "is_configured": true, 00:09:15.205 "data_offset": 2048, 00:09:15.205 "data_size": 63488 00:09:15.205 } 00:09:15.205 ] 00:09:15.205 }' 00:09:15.205 04:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.205 04:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.465 04:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.465 04:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.465 04:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.465 04:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:15.465 04:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.725 04:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:15.725 04:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.725 04:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:15.725 04:07:15 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.725 04:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.725 04:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.725 04:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 0117c8a3-55a3-4c03-8cf0-095081a2688c 00:09:15.725 04:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.725 04:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.725 [2024-11-21 04:07:15.519718] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:15.725 [2024-11-21 04:07:15.520075] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:09:15.725 [2024-11-21 04:07:15.520138] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:15.725 [2024-11-21 04:07:15.520487] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:09:15.725 NewBaseBdev 00:09:15.725 [2024-11-21 04:07:15.520705] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:09:15.725 [2024-11-21 04:07:15.520722] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:09:15.725 [2024-11-21 04:07:15.520852] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:15.725 04:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.725 04:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:15.726 04:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:15.726 
04:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:15.726 04:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:15.726 04:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:15.726 04:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:15.726 04:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:15.726 04:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.726 04:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.726 04:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.726 04:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:15.726 04:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.726 04:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.726 [ 00:09:15.726 { 00:09:15.726 "name": "NewBaseBdev", 00:09:15.726 "aliases": [ 00:09:15.726 "0117c8a3-55a3-4c03-8cf0-095081a2688c" 00:09:15.726 ], 00:09:15.726 "product_name": "Malloc disk", 00:09:15.726 "block_size": 512, 00:09:15.726 "num_blocks": 65536, 00:09:15.726 "uuid": "0117c8a3-55a3-4c03-8cf0-095081a2688c", 00:09:15.726 "assigned_rate_limits": { 00:09:15.726 "rw_ios_per_sec": 0, 00:09:15.726 "rw_mbytes_per_sec": 0, 00:09:15.726 "r_mbytes_per_sec": 0, 00:09:15.726 "w_mbytes_per_sec": 0 00:09:15.726 }, 00:09:15.726 "claimed": true, 00:09:15.726 "claim_type": "exclusive_write", 00:09:15.726 "zoned": false, 00:09:15.726 "supported_io_types": { 00:09:15.726 "read": true, 00:09:15.726 "write": true, 00:09:15.726 
"unmap": true, 00:09:15.726 "flush": true, 00:09:15.726 "reset": true, 00:09:15.726 "nvme_admin": false, 00:09:15.726 "nvme_io": false, 00:09:15.726 "nvme_io_md": false, 00:09:15.726 "write_zeroes": true, 00:09:15.726 "zcopy": true, 00:09:15.726 "get_zone_info": false, 00:09:15.726 "zone_management": false, 00:09:15.726 "zone_append": false, 00:09:15.726 "compare": false, 00:09:15.726 "compare_and_write": false, 00:09:15.726 "abort": true, 00:09:15.726 "seek_hole": false, 00:09:15.726 "seek_data": false, 00:09:15.726 "copy": true, 00:09:15.726 "nvme_iov_md": false 00:09:15.726 }, 00:09:15.726 "memory_domains": [ 00:09:15.726 { 00:09:15.726 "dma_device_id": "system", 00:09:15.726 "dma_device_type": 1 00:09:15.726 }, 00:09:15.726 { 00:09:15.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:15.726 "dma_device_type": 2 00:09:15.726 } 00:09:15.726 ], 00:09:15.726 "driver_specific": {} 00:09:15.726 } 00:09:15.726 ] 00:09:15.726 04:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.726 04:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:15.726 04:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:15.726 04:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:15.726 04:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:15.726 04:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:15.726 04:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:15.726 04:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:15.726 04:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:09:15.726 04:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.726 04:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.726 04:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.726 04:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.726 04:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.726 04:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.726 04:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.726 04:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.726 04:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.726 "name": "Existed_Raid", 00:09:15.726 "uuid": "091d77f9-4519-42a9-abf4-cfa243698652", 00:09:15.726 "strip_size_kb": 64, 00:09:15.726 "state": "online", 00:09:15.726 "raid_level": "concat", 00:09:15.726 "superblock": true, 00:09:15.726 "num_base_bdevs": 3, 00:09:15.726 "num_base_bdevs_discovered": 3, 00:09:15.726 "num_base_bdevs_operational": 3, 00:09:15.726 "base_bdevs_list": [ 00:09:15.726 { 00:09:15.726 "name": "NewBaseBdev", 00:09:15.726 "uuid": "0117c8a3-55a3-4c03-8cf0-095081a2688c", 00:09:15.726 "is_configured": true, 00:09:15.726 "data_offset": 2048, 00:09:15.726 "data_size": 63488 00:09:15.726 }, 00:09:15.726 { 00:09:15.726 "name": "BaseBdev2", 00:09:15.726 "uuid": "05c77bb8-4f4a-420c-ab72-b50bdb661588", 00:09:15.726 "is_configured": true, 00:09:15.726 "data_offset": 2048, 00:09:15.726 "data_size": 63488 00:09:15.726 }, 00:09:15.726 { 00:09:15.726 "name": "BaseBdev3", 00:09:15.726 "uuid": "b16dad26-fdae-42f2-a4ca-d5a49e0bc9ec", 
00:09:15.726 "is_configured": true, 00:09:15.726 "data_offset": 2048, 00:09:15.726 "data_size": 63488 00:09:15.726 } 00:09:15.726 ] 00:09:15.726 }' 00:09:15.726 04:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.726 04:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.013 04:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:16.013 04:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:16.013 04:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:16.013 04:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:16.013 04:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:16.013 04:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:16.013 04:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:16.013 04:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:16.013 04:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.013 04:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.013 [2024-11-21 04:07:15.963327] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:16.272 04:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.272 04:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:16.272 "name": "Existed_Raid", 00:09:16.272 "aliases": [ 00:09:16.272 "091d77f9-4519-42a9-abf4-cfa243698652" 00:09:16.272 ], 00:09:16.272 
"product_name": "Raid Volume", 00:09:16.272 "block_size": 512, 00:09:16.272 "num_blocks": 190464, 00:09:16.272 "uuid": "091d77f9-4519-42a9-abf4-cfa243698652", 00:09:16.272 "assigned_rate_limits": { 00:09:16.272 "rw_ios_per_sec": 0, 00:09:16.272 "rw_mbytes_per_sec": 0, 00:09:16.272 "r_mbytes_per_sec": 0, 00:09:16.272 "w_mbytes_per_sec": 0 00:09:16.272 }, 00:09:16.272 "claimed": false, 00:09:16.272 "zoned": false, 00:09:16.272 "supported_io_types": { 00:09:16.272 "read": true, 00:09:16.272 "write": true, 00:09:16.272 "unmap": true, 00:09:16.272 "flush": true, 00:09:16.272 "reset": true, 00:09:16.272 "nvme_admin": false, 00:09:16.272 "nvme_io": false, 00:09:16.272 "nvme_io_md": false, 00:09:16.272 "write_zeroes": true, 00:09:16.272 "zcopy": false, 00:09:16.272 "get_zone_info": false, 00:09:16.272 "zone_management": false, 00:09:16.272 "zone_append": false, 00:09:16.272 "compare": false, 00:09:16.272 "compare_and_write": false, 00:09:16.272 "abort": false, 00:09:16.272 "seek_hole": false, 00:09:16.272 "seek_data": false, 00:09:16.272 "copy": false, 00:09:16.272 "nvme_iov_md": false 00:09:16.272 }, 00:09:16.272 "memory_domains": [ 00:09:16.272 { 00:09:16.272 "dma_device_id": "system", 00:09:16.272 "dma_device_type": 1 00:09:16.272 }, 00:09:16.272 { 00:09:16.272 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.272 "dma_device_type": 2 00:09:16.272 }, 00:09:16.272 { 00:09:16.272 "dma_device_id": "system", 00:09:16.272 "dma_device_type": 1 00:09:16.272 }, 00:09:16.272 { 00:09:16.272 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.272 "dma_device_type": 2 00:09:16.272 }, 00:09:16.272 { 00:09:16.272 "dma_device_id": "system", 00:09:16.272 "dma_device_type": 1 00:09:16.272 }, 00:09:16.273 { 00:09:16.273 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.273 "dma_device_type": 2 00:09:16.273 } 00:09:16.273 ], 00:09:16.273 "driver_specific": { 00:09:16.273 "raid": { 00:09:16.273 "uuid": "091d77f9-4519-42a9-abf4-cfa243698652", 00:09:16.273 "strip_size_kb": 64, 00:09:16.273 
"state": "online", 00:09:16.273 "raid_level": "concat", 00:09:16.273 "superblock": true, 00:09:16.273 "num_base_bdevs": 3, 00:09:16.273 "num_base_bdevs_discovered": 3, 00:09:16.273 "num_base_bdevs_operational": 3, 00:09:16.273 "base_bdevs_list": [ 00:09:16.273 { 00:09:16.273 "name": "NewBaseBdev", 00:09:16.273 "uuid": "0117c8a3-55a3-4c03-8cf0-095081a2688c", 00:09:16.273 "is_configured": true, 00:09:16.273 "data_offset": 2048, 00:09:16.273 "data_size": 63488 00:09:16.273 }, 00:09:16.273 { 00:09:16.273 "name": "BaseBdev2", 00:09:16.273 "uuid": "05c77bb8-4f4a-420c-ab72-b50bdb661588", 00:09:16.273 "is_configured": true, 00:09:16.273 "data_offset": 2048, 00:09:16.273 "data_size": 63488 00:09:16.273 }, 00:09:16.273 { 00:09:16.273 "name": "BaseBdev3", 00:09:16.273 "uuid": "b16dad26-fdae-42f2-a4ca-d5a49e0bc9ec", 00:09:16.273 "is_configured": true, 00:09:16.273 "data_offset": 2048, 00:09:16.273 "data_size": 63488 00:09:16.273 } 00:09:16.273 ] 00:09:16.273 } 00:09:16.273 } 00:09:16.273 }' 00:09:16.273 04:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:16.273 04:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:16.273 BaseBdev2 00:09:16.273 BaseBdev3' 00:09:16.273 04:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.273 04:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:16.273 04:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.273 04:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:16.273 04:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:09:16.273 04:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.273 04:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.273 04:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.273 04:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.273 04:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.273 04:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.273 04:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:16.273 04:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.273 04:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.273 04:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.273 04:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.273 04:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.273 04:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.273 04:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.273 04:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:16.273 04:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.273 04:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:09:16.273 04:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.273 04:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.533 04:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.533 04:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.533 04:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:16.533 04:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.533 04:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.533 [2024-11-21 04:07:16.270457] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:16.533 [2024-11-21 04:07:16.270486] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:16.533 [2024-11-21 04:07:16.270586] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:16.533 [2024-11-21 04:07:16.270650] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:16.533 [2024-11-21 04:07:16.270664] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:09:16.533 04:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.533 04:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 77362 00:09:16.533 04:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 77362 ']' 00:09:16.533 04:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 77362 00:09:16.533 
04:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:16.533 04:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:16.533 04:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77362 00:09:16.533 04:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:16.533 04:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:16.533 04:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77362' 00:09:16.533 killing process with pid 77362 00:09:16.533 04:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 77362 00:09:16.533 [2024-11-21 04:07:16.322594] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:16.533 04:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 77362 00:09:16.533 [2024-11-21 04:07:16.381879] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:16.795 ************************************ 00:09:16.795 END TEST raid_state_function_test_sb 00:09:16.795 ************************************ 00:09:16.795 04:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:16.795 00:09:16.795 real 0m9.033s 00:09:16.795 user 0m15.086s 00:09:16.795 sys 0m1.965s 00:09:16.795 04:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:16.795 04:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.055 04:07:16 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:09:17.055 04:07:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:17.055 04:07:16 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:09:17.055 04:07:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:17.055 ************************************ 00:09:17.055 START TEST raid_superblock_test 00:09:17.055 ************************************ 00:09:17.055 04:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 00:09:17.055 04:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:09:17.055 04:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:17.055 04:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:17.055 04:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:17.055 04:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:17.055 04:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:17.055 04:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:17.055 04:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:17.055 04:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:17.055 04:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:17.055 04:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:17.055 04:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:17.055 04:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:17.055 04:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:09:17.055 04:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:17.055 04:07:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:17.055 04:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=77966 00:09:17.055 04:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:17.055 04:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 77966 00:09:17.055 04:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 77966 ']' 00:09:17.055 04:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:17.055 04:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:17.055 04:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:17.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:17.055 04:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:17.056 04:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.056 [2024-11-21 04:07:16.880569] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:09:17.056 [2024-11-21 04:07:16.881347] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77966 ] 00:09:17.056 [2024-11-21 04:07:17.015383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.315 [2024-11-21 04:07:17.054075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.315 [2024-11-21 04:07:17.130827] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:17.315 [2024-11-21 04:07:17.130981] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:17.886 04:07:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:17.886 04:07:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:17.886 04:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:17.886 04:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:17.886 04:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:17.886 04:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:17.886 04:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:17.886 04:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:17.886 04:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:17.886 04:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:17.886 04:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:17.886 
04:07:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.886 04:07:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.886 malloc1 00:09:17.886 04:07:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.886 04:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:17.886 04:07:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.886 04:07:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.886 [2024-11-21 04:07:17.737323] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:17.886 [2024-11-21 04:07:17.737450] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:17.886 [2024-11-21 04:07:17.737522] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:09:17.886 [2024-11-21 04:07:17.737581] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:17.886 [2024-11-21 04:07:17.740055] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:17.886 [2024-11-21 04:07:17.740136] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:17.886 pt1 00:09:17.886 04:07:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.886 04:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:17.886 04:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:17.886 04:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:17.886 04:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:17.886 04:07:17 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:17.886 04:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:17.886 04:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:17.886 04:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:17.886 04:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:17.886 04:07:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.886 04:07:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.886 malloc2 00:09:17.886 04:07:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.886 04:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:17.886 04:07:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.886 04:07:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.886 [2024-11-21 04:07:17.775905] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:17.886 [2024-11-21 04:07:17.776005] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:17.886 [2024-11-21 04:07:17.776047] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:17.886 [2024-11-21 04:07:17.776086] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:17.886 [2024-11-21 04:07:17.778482] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:17.886 [2024-11-21 04:07:17.778558] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:17.886 
pt2 00:09:17.886 04:07:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.886 04:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:17.886 04:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:17.886 04:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:17.886 04:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:17.886 04:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:17.886 04:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:17.886 04:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:17.887 04:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:17.887 04:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:17.887 04:07:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.887 04:07:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.887 malloc3 00:09:17.887 04:07:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.887 04:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:17.887 04:07:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.887 04:07:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.887 [2024-11-21 04:07:17.814559] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:17.887 [2024-11-21 04:07:17.814635] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:17.887 [2024-11-21 04:07:17.814661] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:17.887 [2024-11-21 04:07:17.814674] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:17.887 [2024-11-21 04:07:17.817396] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:17.887 [2024-11-21 04:07:17.817501] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:17.887 pt3 00:09:17.887 04:07:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.887 04:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:17.887 04:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:17.887 04:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:17.887 04:07:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.887 04:07:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.887 [2024-11-21 04:07:17.826613] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:17.887 [2024-11-21 04:07:17.828750] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:17.887 [2024-11-21 04:07:17.828806] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:17.887 [2024-11-21 04:07:17.828960] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:09:17.887 [2024-11-21 04:07:17.828975] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:17.887 [2024-11-21 04:07:17.829343] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 
00:09:17.887 [2024-11-21 04:07:17.829573] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:09:17.887 [2024-11-21 04:07:17.829626] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:09:17.887 [2024-11-21 04:07:17.829865] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:17.887 04:07:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.887 04:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:17.887 04:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:17.887 04:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:17.887 04:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:17.887 04:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:17.887 04:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:17.887 04:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:17.887 04:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.887 04:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:17.887 04:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.887 04:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.887 04:07:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.887 04:07:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.887 04:07:17 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:17.887 04:07:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.147 04:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.147 "name": "raid_bdev1", 00:09:18.147 "uuid": "904fd9a4-90a4-4108-91c9-6dbb919c6ce4", 00:09:18.147 "strip_size_kb": 64, 00:09:18.147 "state": "online", 00:09:18.147 "raid_level": "concat", 00:09:18.147 "superblock": true, 00:09:18.147 "num_base_bdevs": 3, 00:09:18.147 "num_base_bdevs_discovered": 3, 00:09:18.147 "num_base_bdevs_operational": 3, 00:09:18.147 "base_bdevs_list": [ 00:09:18.147 { 00:09:18.147 "name": "pt1", 00:09:18.147 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:18.147 "is_configured": true, 00:09:18.147 "data_offset": 2048, 00:09:18.147 "data_size": 63488 00:09:18.147 }, 00:09:18.147 { 00:09:18.147 "name": "pt2", 00:09:18.147 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:18.147 "is_configured": true, 00:09:18.147 "data_offset": 2048, 00:09:18.147 "data_size": 63488 00:09:18.147 }, 00:09:18.147 { 00:09:18.147 "name": "pt3", 00:09:18.147 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:18.147 "is_configured": true, 00:09:18.147 "data_offset": 2048, 00:09:18.147 "data_size": 63488 00:09:18.147 } 00:09:18.147 ] 00:09:18.147 }' 00:09:18.147 04:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.147 04:07:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.407 04:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:18.407 04:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:18.407 04:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:18.407 04:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:09:18.407 04:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:18.407 04:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:18.407 04:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:18.407 04:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.407 04:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.407 04:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:18.407 [2024-11-21 04:07:18.306114] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:18.407 04:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.407 04:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:18.407 "name": "raid_bdev1", 00:09:18.407 "aliases": [ 00:09:18.407 "904fd9a4-90a4-4108-91c9-6dbb919c6ce4" 00:09:18.407 ], 00:09:18.407 "product_name": "Raid Volume", 00:09:18.407 "block_size": 512, 00:09:18.407 "num_blocks": 190464, 00:09:18.407 "uuid": "904fd9a4-90a4-4108-91c9-6dbb919c6ce4", 00:09:18.407 "assigned_rate_limits": { 00:09:18.407 "rw_ios_per_sec": 0, 00:09:18.407 "rw_mbytes_per_sec": 0, 00:09:18.407 "r_mbytes_per_sec": 0, 00:09:18.407 "w_mbytes_per_sec": 0 00:09:18.407 }, 00:09:18.407 "claimed": false, 00:09:18.407 "zoned": false, 00:09:18.407 "supported_io_types": { 00:09:18.407 "read": true, 00:09:18.407 "write": true, 00:09:18.407 "unmap": true, 00:09:18.407 "flush": true, 00:09:18.407 "reset": true, 00:09:18.407 "nvme_admin": false, 00:09:18.407 "nvme_io": false, 00:09:18.407 "nvme_io_md": false, 00:09:18.407 "write_zeroes": true, 00:09:18.407 "zcopy": false, 00:09:18.407 "get_zone_info": false, 00:09:18.407 "zone_management": false, 00:09:18.407 "zone_append": false, 00:09:18.407 "compare": 
false, 00:09:18.407 "compare_and_write": false, 00:09:18.407 "abort": false, 00:09:18.407 "seek_hole": false, 00:09:18.407 "seek_data": false, 00:09:18.407 "copy": false, 00:09:18.407 "nvme_iov_md": false 00:09:18.407 }, 00:09:18.407 "memory_domains": [ 00:09:18.407 { 00:09:18.407 "dma_device_id": "system", 00:09:18.407 "dma_device_type": 1 00:09:18.407 }, 00:09:18.407 { 00:09:18.407 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.407 "dma_device_type": 2 00:09:18.407 }, 00:09:18.407 { 00:09:18.407 "dma_device_id": "system", 00:09:18.407 "dma_device_type": 1 00:09:18.407 }, 00:09:18.407 { 00:09:18.407 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.407 "dma_device_type": 2 00:09:18.407 }, 00:09:18.407 { 00:09:18.407 "dma_device_id": "system", 00:09:18.407 "dma_device_type": 1 00:09:18.407 }, 00:09:18.407 { 00:09:18.407 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.407 "dma_device_type": 2 00:09:18.407 } 00:09:18.407 ], 00:09:18.407 "driver_specific": { 00:09:18.407 "raid": { 00:09:18.407 "uuid": "904fd9a4-90a4-4108-91c9-6dbb919c6ce4", 00:09:18.407 "strip_size_kb": 64, 00:09:18.407 "state": "online", 00:09:18.407 "raid_level": "concat", 00:09:18.407 "superblock": true, 00:09:18.407 "num_base_bdevs": 3, 00:09:18.407 "num_base_bdevs_discovered": 3, 00:09:18.407 "num_base_bdevs_operational": 3, 00:09:18.407 "base_bdevs_list": [ 00:09:18.407 { 00:09:18.407 "name": "pt1", 00:09:18.407 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:18.407 "is_configured": true, 00:09:18.407 "data_offset": 2048, 00:09:18.407 "data_size": 63488 00:09:18.407 }, 00:09:18.407 { 00:09:18.407 "name": "pt2", 00:09:18.407 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:18.407 "is_configured": true, 00:09:18.407 "data_offset": 2048, 00:09:18.407 "data_size": 63488 00:09:18.407 }, 00:09:18.407 { 00:09:18.407 "name": "pt3", 00:09:18.407 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:18.407 "is_configured": true, 00:09:18.407 "data_offset": 2048, 00:09:18.407 
"data_size": 63488 00:09:18.407 } 00:09:18.407 ] 00:09:18.407 } 00:09:18.407 } 00:09:18.407 }' 00:09:18.407 04:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:18.666 04:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:18.666 pt2 00:09:18.666 pt3' 00:09:18.666 04:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:18.666 04:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:18.666 04:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:18.666 04:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:18.666 04:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.667 04:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.667 04:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:18.667 04:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.667 04:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:18.667 04:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:18.667 04:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:18.667 04:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:18.667 04:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.667 04:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:09:18.667 04:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:18.667 04:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.667 04:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:18.667 04:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:18.667 04:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:18.667 04:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:18.667 04:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:18.667 04:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.667 04:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.667 04:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.667 04:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:18.667 04:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:18.667 04:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:18.667 04:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:18.667 04:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.667 04:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.667 [2024-11-21 04:07:18.565603] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:18.667 04:07:18 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.667 04:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=904fd9a4-90a4-4108-91c9-6dbb919c6ce4 00:09:18.667 04:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 904fd9a4-90a4-4108-91c9-6dbb919c6ce4 ']' 00:09:18.667 04:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:18.667 04:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.667 04:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.667 [2024-11-21 04:07:18.609278] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:18.667 [2024-11-21 04:07:18.609347] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:18.667 [2024-11-21 04:07:18.609468] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:18.667 [2024-11-21 04:07:18.609569] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:18.667 [2024-11-21 04:07:18.609587] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:09:18.667 04:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.667 04:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.667 04:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.667 04:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.667 04:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:18.667 04:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.927 04:07:18 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:18.927 04:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:18.927 04:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:18.927 04:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:18.927 04:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.927 04:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.927 04:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.927 04:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:18.927 04:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:18.927 04:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.927 04:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.927 04:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.927 04:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:18.927 04:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:18.927 04:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.927 04:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.927 04:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.927 04:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:18.927 04:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 
00:09:18.927 04:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.927 04:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.927 04:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.927 04:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:18.927 04:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:18.927 04:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:18.927 04:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:18.927 04:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:18.927 04:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:18.927 04:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:18.927 04:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:18.927 04:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:18.927 04:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.927 04:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.927 [2024-11-21 04:07:18.757039] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:18.927 [2024-11-21 04:07:18.759304] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:18.927 
[2024-11-21 04:07:18.759417] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:18.927 [2024-11-21 04:07:18.759484] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:18.927 [2024-11-21 04:07:18.759547] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:18.927 [2024-11-21 04:07:18.759584] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:18.927 [2024-11-21 04:07:18.759598] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:18.927 [2024-11-21 04:07:18.759609] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:09:18.927 request: 00:09:18.927 { 00:09:18.927 "name": "raid_bdev1", 00:09:18.927 "raid_level": "concat", 00:09:18.927 "base_bdevs": [ 00:09:18.927 "malloc1", 00:09:18.927 "malloc2", 00:09:18.927 "malloc3" 00:09:18.927 ], 00:09:18.927 "strip_size_kb": 64, 00:09:18.927 "superblock": false, 00:09:18.927 "method": "bdev_raid_create", 00:09:18.927 "req_id": 1 00:09:18.927 } 00:09:18.927 Got JSON-RPC error response 00:09:18.927 response: 00:09:18.927 { 00:09:18.927 "code": -17, 00:09:18.927 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:18.927 } 00:09:18.927 04:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:18.927 04:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:18.927 04:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:18.927 04:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:18.927 04:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:18.927 04:07:18 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.927 04:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:18.927 04:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.927 04:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.927 04:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.927 04:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:18.927 04:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:18.927 04:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:18.927 04:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.927 04:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.927 [2024-11-21 04:07:18.820884] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:18.927 [2024-11-21 04:07:18.820980] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:18.927 [2024-11-21 04:07:18.821026] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:18.927 [2024-11-21 04:07:18.821074] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:18.927 [2024-11-21 04:07:18.823582] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:18.927 [2024-11-21 04:07:18.823656] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:18.927 [2024-11-21 04:07:18.823768] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:18.927 [2024-11-21 04:07:18.823867] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev pt1 is claimed 00:09:18.927 pt1 00:09:18.927 04:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.927 04:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:18.927 04:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:18.927 04:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:18.927 04:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:18.927 04:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:18.928 04:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:18.928 04:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.928 04:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.928 04:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.928 04:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.928 04:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:18.928 04:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.928 04:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.928 04:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.928 04:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.928 04:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.928 "name": "raid_bdev1", 00:09:18.928 "uuid": 
"904fd9a4-90a4-4108-91c9-6dbb919c6ce4", 00:09:18.928 "strip_size_kb": 64, 00:09:18.928 "state": "configuring", 00:09:18.928 "raid_level": "concat", 00:09:18.928 "superblock": true, 00:09:18.928 "num_base_bdevs": 3, 00:09:18.928 "num_base_bdevs_discovered": 1, 00:09:18.928 "num_base_bdevs_operational": 3, 00:09:18.928 "base_bdevs_list": [ 00:09:18.928 { 00:09:18.928 "name": "pt1", 00:09:18.928 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:18.928 "is_configured": true, 00:09:18.928 "data_offset": 2048, 00:09:18.928 "data_size": 63488 00:09:18.928 }, 00:09:18.928 { 00:09:18.928 "name": null, 00:09:18.928 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:18.928 "is_configured": false, 00:09:18.928 "data_offset": 2048, 00:09:18.928 "data_size": 63488 00:09:18.928 }, 00:09:18.928 { 00:09:18.928 "name": null, 00:09:18.928 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:18.928 "is_configured": false, 00:09:18.928 "data_offset": 2048, 00:09:18.928 "data_size": 63488 00:09:18.928 } 00:09:18.928 ] 00:09:18.928 }' 00:09:18.928 04:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.928 04:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.497 04:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:19.497 04:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:19.497 04:07:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.497 04:07:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.497 [2024-11-21 04:07:19.232194] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:19.497 [2024-11-21 04:07:19.232278] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:19.497 [2024-11-21 04:07:19.232301] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:09:19.497 [2024-11-21 04:07:19.232316] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:19.497 [2024-11-21 04:07:19.232781] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:19.497 [2024-11-21 04:07:19.232811] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:19.497 [2024-11-21 04:07:19.232882] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:19.497 [2024-11-21 04:07:19.232909] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:19.497 pt2 00:09:19.497 04:07:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.497 04:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:19.497 04:07:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.497 04:07:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.497 [2024-11-21 04:07:19.240204] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:19.497 04:07:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.497 04:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:19.497 04:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:19.497 04:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:19.497 04:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:19.497 04:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:19.497 04:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:09:19.498 04:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.498 04:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.498 04:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.498 04:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.498 04:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:19.498 04:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.498 04:07:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.498 04:07:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.498 04:07:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.498 04:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.498 "name": "raid_bdev1", 00:09:19.498 "uuid": "904fd9a4-90a4-4108-91c9-6dbb919c6ce4", 00:09:19.498 "strip_size_kb": 64, 00:09:19.498 "state": "configuring", 00:09:19.498 "raid_level": "concat", 00:09:19.498 "superblock": true, 00:09:19.498 "num_base_bdevs": 3, 00:09:19.498 "num_base_bdevs_discovered": 1, 00:09:19.498 "num_base_bdevs_operational": 3, 00:09:19.498 "base_bdevs_list": [ 00:09:19.498 { 00:09:19.498 "name": "pt1", 00:09:19.498 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:19.498 "is_configured": true, 00:09:19.498 "data_offset": 2048, 00:09:19.498 "data_size": 63488 00:09:19.498 }, 00:09:19.498 { 00:09:19.498 "name": null, 00:09:19.498 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:19.498 "is_configured": false, 00:09:19.498 "data_offset": 0, 00:09:19.498 "data_size": 63488 00:09:19.498 }, 00:09:19.498 { 00:09:19.498 "name": null, 00:09:19.498 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:09:19.498 "is_configured": false, 00:09:19.498 "data_offset": 2048, 00:09:19.498 "data_size": 63488 00:09:19.498 } 00:09:19.498 ] 00:09:19.498 }' 00:09:19.498 04:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.498 04:07:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.758 04:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:19.758 04:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:19.758 04:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:19.758 04:07:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.758 04:07:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.758 [2024-11-21 04:07:19.679492] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:19.758 [2024-11-21 04:07:19.679627] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:19.758 [2024-11-21 04:07:19.679696] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:19.758 [2024-11-21 04:07:19.679734] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:19.758 [2024-11-21 04:07:19.680297] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:19.758 [2024-11-21 04:07:19.680362] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:19.758 [2024-11-21 04:07:19.680512] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:19.758 [2024-11-21 04:07:19.680586] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:19.758 pt2 00:09:19.758 04:07:19 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.758 04:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:19.758 04:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:19.758 04:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:19.758 04:07:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.758 04:07:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.758 [2024-11-21 04:07:19.691449] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:19.758 [2024-11-21 04:07:19.691551] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:19.758 [2024-11-21 04:07:19.691605] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:19.758 [2024-11-21 04:07:19.691629] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:19.759 [2024-11-21 04:07:19.692004] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:19.759 [2024-11-21 04:07:19.692033] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:19.759 [2024-11-21 04:07:19.692095] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:19.759 [2024-11-21 04:07:19.692112] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:19.759 [2024-11-21 04:07:19.692213] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:09:19.759 [2024-11-21 04:07:19.692241] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:19.759 [2024-11-21 04:07:19.692521] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:09:19.759 [2024-11-21 
04:07:19.692680] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:09:19.759 [2024-11-21 04:07:19.692703] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:09:19.759 [2024-11-21 04:07:19.692810] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:19.759 pt3 00:09:19.759 04:07:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.759 04:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:19.759 04:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:19.759 04:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:19.759 04:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:19.759 04:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:19.759 04:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:19.759 04:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:19.759 04:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:19.759 04:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.759 04:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.759 04:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.759 04:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.759 04:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.759 04:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:09:19.759 04:07:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.759 04:07:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.759 04:07:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.018 04:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.018 "name": "raid_bdev1", 00:09:20.018 "uuid": "904fd9a4-90a4-4108-91c9-6dbb919c6ce4", 00:09:20.018 "strip_size_kb": 64, 00:09:20.018 "state": "online", 00:09:20.018 "raid_level": "concat", 00:09:20.018 "superblock": true, 00:09:20.018 "num_base_bdevs": 3, 00:09:20.018 "num_base_bdevs_discovered": 3, 00:09:20.018 "num_base_bdevs_operational": 3, 00:09:20.018 "base_bdevs_list": [ 00:09:20.018 { 00:09:20.018 "name": "pt1", 00:09:20.018 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:20.018 "is_configured": true, 00:09:20.018 "data_offset": 2048, 00:09:20.018 "data_size": 63488 00:09:20.018 }, 00:09:20.018 { 00:09:20.018 "name": "pt2", 00:09:20.018 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:20.018 "is_configured": true, 00:09:20.018 "data_offset": 2048, 00:09:20.018 "data_size": 63488 00:09:20.018 }, 00:09:20.018 { 00:09:20.018 "name": "pt3", 00:09:20.018 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:20.018 "is_configured": true, 00:09:20.018 "data_offset": 2048, 00:09:20.018 "data_size": 63488 00:09:20.018 } 00:09:20.018 ] 00:09:20.018 }' 00:09:20.018 04:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.018 04:07:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.277 04:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:20.277 04:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:20.277 04:07:20 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:20.277 04:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:20.277 04:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:20.277 04:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:20.277 04:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:20.277 04:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:20.277 04:07:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.277 04:07:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.277 [2024-11-21 04:07:20.167022] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:20.277 04:07:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.277 04:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:20.277 "name": "raid_bdev1", 00:09:20.277 "aliases": [ 00:09:20.277 "904fd9a4-90a4-4108-91c9-6dbb919c6ce4" 00:09:20.277 ], 00:09:20.277 "product_name": "Raid Volume", 00:09:20.277 "block_size": 512, 00:09:20.277 "num_blocks": 190464, 00:09:20.277 "uuid": "904fd9a4-90a4-4108-91c9-6dbb919c6ce4", 00:09:20.277 "assigned_rate_limits": { 00:09:20.277 "rw_ios_per_sec": 0, 00:09:20.277 "rw_mbytes_per_sec": 0, 00:09:20.277 "r_mbytes_per_sec": 0, 00:09:20.277 "w_mbytes_per_sec": 0 00:09:20.277 }, 00:09:20.278 "claimed": false, 00:09:20.278 "zoned": false, 00:09:20.278 "supported_io_types": { 00:09:20.278 "read": true, 00:09:20.278 "write": true, 00:09:20.278 "unmap": true, 00:09:20.278 "flush": true, 00:09:20.278 "reset": true, 00:09:20.278 "nvme_admin": false, 00:09:20.278 "nvme_io": false, 00:09:20.278 "nvme_io_md": false, 00:09:20.278 
"write_zeroes": true, 00:09:20.278 "zcopy": false, 00:09:20.278 "get_zone_info": false, 00:09:20.278 "zone_management": false, 00:09:20.278 "zone_append": false, 00:09:20.278 "compare": false, 00:09:20.278 "compare_and_write": false, 00:09:20.278 "abort": false, 00:09:20.278 "seek_hole": false, 00:09:20.278 "seek_data": false, 00:09:20.278 "copy": false, 00:09:20.278 "nvme_iov_md": false 00:09:20.278 }, 00:09:20.278 "memory_domains": [ 00:09:20.278 { 00:09:20.278 "dma_device_id": "system", 00:09:20.278 "dma_device_type": 1 00:09:20.278 }, 00:09:20.278 { 00:09:20.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.278 "dma_device_type": 2 00:09:20.278 }, 00:09:20.278 { 00:09:20.278 "dma_device_id": "system", 00:09:20.278 "dma_device_type": 1 00:09:20.278 }, 00:09:20.278 { 00:09:20.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.278 "dma_device_type": 2 00:09:20.278 }, 00:09:20.278 { 00:09:20.278 "dma_device_id": "system", 00:09:20.278 "dma_device_type": 1 00:09:20.278 }, 00:09:20.278 { 00:09:20.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.278 "dma_device_type": 2 00:09:20.278 } 00:09:20.278 ], 00:09:20.278 "driver_specific": { 00:09:20.278 "raid": { 00:09:20.278 "uuid": "904fd9a4-90a4-4108-91c9-6dbb919c6ce4", 00:09:20.278 "strip_size_kb": 64, 00:09:20.278 "state": "online", 00:09:20.278 "raid_level": "concat", 00:09:20.278 "superblock": true, 00:09:20.278 "num_base_bdevs": 3, 00:09:20.278 "num_base_bdevs_discovered": 3, 00:09:20.278 "num_base_bdevs_operational": 3, 00:09:20.278 "base_bdevs_list": [ 00:09:20.278 { 00:09:20.278 "name": "pt1", 00:09:20.278 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:20.278 "is_configured": true, 00:09:20.278 "data_offset": 2048, 00:09:20.278 "data_size": 63488 00:09:20.278 }, 00:09:20.278 { 00:09:20.278 "name": "pt2", 00:09:20.278 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:20.278 "is_configured": true, 00:09:20.278 "data_offset": 2048, 00:09:20.278 "data_size": 63488 00:09:20.278 }, 00:09:20.278 
{ 00:09:20.278 "name": "pt3", 00:09:20.278 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:20.278 "is_configured": true, 00:09:20.278 "data_offset": 2048, 00:09:20.278 "data_size": 63488 00:09:20.278 } 00:09:20.278 ] 00:09:20.278 } 00:09:20.278 } 00:09:20.278 }' 00:09:20.278 04:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:20.537 04:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:20.537 pt2 00:09:20.537 pt3' 00:09:20.537 04:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:20.537 04:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:20.537 04:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:20.537 04:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:20.537 04:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:20.537 04:07:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.537 04:07:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.537 04:07:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.537 04:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:20.537 04:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:20.537 04:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:20.537 04:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:09:20.537 04:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:20.537 04:07:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.537 04:07:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.537 04:07:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.537 04:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:20.537 04:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:20.537 04:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:20.537 04:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:20.537 04:07:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.537 04:07:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.537 04:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:20.537 04:07:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.537 04:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:20.537 04:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:20.537 04:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:20.537 04:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:20.537 04:07:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.537 04:07:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.537 [2024-11-21 
04:07:20.430509] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:20.537 04:07:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.537 04:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 904fd9a4-90a4-4108-91c9-6dbb919c6ce4 '!=' 904fd9a4-90a4-4108-91c9-6dbb919c6ce4 ']' 00:09:20.537 04:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:09:20.537 04:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:20.537 04:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:20.537 04:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 77966 00:09:20.537 04:07:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 77966 ']' 00:09:20.537 04:07:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 77966 00:09:20.537 04:07:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:20.537 04:07:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:20.537 04:07:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77966 00:09:20.537 04:07:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:20.537 04:07:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:20.537 killing process with pid 77966 00:09:20.537 04:07:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77966' 00:09:20.537 04:07:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 77966 00:09:20.537 [2024-11-21 04:07:20.488600] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:20.538 [2024-11-21 04:07:20.488715] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:20.538 04:07:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 77966 00:09:20.538 [2024-11-21 04:07:20.488797] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:20.538 [2024-11-21 04:07:20.488807] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:09:20.797 [2024-11-21 04:07:20.551727] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:21.057 04:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:21.057 00:09:21.057 real 0m4.090s 00:09:21.057 user 0m6.288s 00:09:21.057 sys 0m0.945s 00:09:21.057 04:07:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:21.057 ************************************ 00:09:21.057 END TEST raid_superblock_test 00:09:21.057 ************************************ 00:09:21.057 04:07:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.057 04:07:20 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:09:21.057 04:07:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:21.057 04:07:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:21.057 04:07:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:21.057 ************************************ 00:09:21.057 START TEST raid_read_error_test 00:09:21.057 ************************************ 00:09:21.057 04:07:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read 00:09:21.057 04:07:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:21.057 04:07:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:21.057 04:07:20 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:21.057 04:07:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:21.057 04:07:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:21.057 04:07:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:21.057 04:07:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:21.057 04:07:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:21.057 04:07:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:21.057 04:07:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:21.057 04:07:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:21.057 04:07:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:21.057 04:07:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:21.057 04:07:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:21.057 04:07:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:21.057 04:07:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:21.057 04:07:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:21.057 04:07:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:21.057 04:07:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:21.058 04:07:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:21.058 04:07:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:21.058 04:07:20 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:21.058 04:07:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:21.058 04:07:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:21.058 04:07:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:21.058 04:07:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.WKU0zds2RO 00:09:21.058 04:07:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=78208 00:09:21.058 04:07:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:21.058 04:07:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 78208 00:09:21.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:21.058 04:07:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 78208 ']' 00:09:21.058 04:07:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:21.058 04:07:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:21.058 04:07:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:21.058 04:07:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:21.058 04:07:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.317 [2024-11-21 04:07:21.062032] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:09:21.317 [2024-11-21 04:07:21.062166] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78208 ] 00:09:21.318 [2024-11-21 04:07:21.196769] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.318 [2024-11-21 04:07:21.235724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.578 [2024-11-21 04:07:21.312156] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:21.578 [2024-11-21 04:07:21.312191] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:22.149 04:07:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:22.149 04:07:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:22.149 04:07:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:22.149 04:07:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:22.149 04:07:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.149 04:07:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.149 BaseBdev1_malloc 00:09:22.149 04:07:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.149 04:07:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:22.149 04:07:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.149 04:07:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.149 true 00:09:22.149 04:07:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:22.149 04:07:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:22.149 04:07:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.149 04:07:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.149 [2024-11-21 04:07:21.918167] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:22.149 [2024-11-21 04:07:21.918251] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:22.149 [2024-11-21 04:07:21.918281] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:09:22.149 [2024-11-21 04:07:21.918292] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:22.149 [2024-11-21 04:07:21.920767] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:22.149 [2024-11-21 04:07:21.920806] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:22.149 BaseBdev1 00:09:22.149 04:07:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.149 04:07:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:22.149 04:07:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:22.149 04:07:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.149 04:07:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.149 BaseBdev2_malloc 00:09:22.149 04:07:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.149 04:07:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:22.149 04:07:21 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.149 04:07:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.149 true 00:09:22.149 04:07:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.149 04:07:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:22.149 04:07:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.149 04:07:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.149 [2024-11-21 04:07:21.964959] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:22.149 [2024-11-21 04:07:21.965011] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:22.149 [2024-11-21 04:07:21.965030] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:09:22.149 [2024-11-21 04:07:21.965048] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:22.149 [2024-11-21 04:07:21.967502] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:22.149 [2024-11-21 04:07:21.967539] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:22.149 BaseBdev2 00:09:22.149 04:07:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.149 04:07:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:22.149 04:07:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:22.149 04:07:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.149 04:07:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.149 BaseBdev3_malloc 00:09:22.149 04:07:21 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.149 04:07:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:22.149 04:07:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.149 04:07:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.149 true 00:09:22.149 04:07:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.149 04:07:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:22.149 04:07:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.149 04:07:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.149 [2024-11-21 04:07:22.011939] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:22.149 [2024-11-21 04:07:22.012064] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:22.149 [2024-11-21 04:07:22.012093] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:09:22.149 [2024-11-21 04:07:22.012102] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:22.150 [2024-11-21 04:07:22.014586] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:22.150 [2024-11-21 04:07:22.014620] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:22.150 BaseBdev3 00:09:22.150 04:07:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.150 04:07:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:22.150 04:07:22 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.150 04:07:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.150 [2024-11-21 04:07:22.024016] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:22.150 [2024-11-21 04:07:22.026185] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:22.150 [2024-11-21 04:07:22.026340] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:22.150 [2024-11-21 04:07:22.026557] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:09:22.150 [2024-11-21 04:07:22.026577] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:22.150 [2024-11-21 04:07:22.026835] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002bb0 00:09:22.150 [2024-11-21 04:07:22.026963] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:09:22.150 [2024-11-21 04:07:22.026972] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:09:22.150 [2024-11-21 04:07:22.027095] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:22.150 04:07:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.150 04:07:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:22.150 04:07:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:22.150 04:07:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:22.150 04:07:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:22.150 04:07:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:22.150 04:07:22 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:22.150 04:07:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.150 04:07:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.150 04:07:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.150 04:07:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.150 04:07:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:22.150 04:07:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.150 04:07:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.150 04:07:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.150 04:07:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.150 04:07:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.150 "name": "raid_bdev1", 00:09:22.150 "uuid": "2c3f15c8-f6c5-4109-a2a5-0c8e8510ccc9", 00:09:22.150 "strip_size_kb": 64, 00:09:22.150 "state": "online", 00:09:22.150 "raid_level": "concat", 00:09:22.150 "superblock": true, 00:09:22.150 "num_base_bdevs": 3, 00:09:22.150 "num_base_bdevs_discovered": 3, 00:09:22.150 "num_base_bdevs_operational": 3, 00:09:22.150 "base_bdevs_list": [ 00:09:22.150 { 00:09:22.150 "name": "BaseBdev1", 00:09:22.150 "uuid": "5a9f7005-ab72-591d-a1e7-2fcdc64e5f11", 00:09:22.150 "is_configured": true, 00:09:22.150 "data_offset": 2048, 00:09:22.150 "data_size": 63488 00:09:22.150 }, 00:09:22.150 { 00:09:22.150 "name": "BaseBdev2", 00:09:22.150 "uuid": "03d09f9c-2fea-517b-a15c-007f75806716", 00:09:22.150 "is_configured": true, 00:09:22.150 "data_offset": 2048, 00:09:22.150 "data_size": 63488 
00:09:22.150 }, 00:09:22.150 { 00:09:22.150 "name": "BaseBdev3", 00:09:22.150 "uuid": "36cdba5f-8dab-5850-aff1-66b46dd70ab7", 00:09:22.150 "is_configured": true, 00:09:22.150 "data_offset": 2048, 00:09:22.150 "data_size": 63488 00:09:22.150 } 00:09:22.150 ] 00:09:22.150 }' 00:09:22.150 04:07:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.150 04:07:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.719 04:07:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:22.719 04:07:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:22.719 [2024-11-21 04:07:22.547684] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002d50 00:09:23.656 04:07:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:23.656 04:07:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.656 04:07:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.656 04:07:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.656 04:07:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:23.656 04:07:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:23.656 04:07:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:23.656 04:07:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:23.656 04:07:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:23.656 04:07:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:09:23.656 04:07:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:23.656 04:07:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:23.656 04:07:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:23.656 04:07:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.656 04:07:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.656 04:07:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.656 04:07:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.656 04:07:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.656 04:07:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:23.656 04:07:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.656 04:07:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.656 04:07:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.656 04:07:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.656 "name": "raid_bdev1", 00:09:23.656 "uuid": "2c3f15c8-f6c5-4109-a2a5-0c8e8510ccc9", 00:09:23.656 "strip_size_kb": 64, 00:09:23.656 "state": "online", 00:09:23.656 "raid_level": "concat", 00:09:23.656 "superblock": true, 00:09:23.656 "num_base_bdevs": 3, 00:09:23.656 "num_base_bdevs_discovered": 3, 00:09:23.656 "num_base_bdevs_operational": 3, 00:09:23.656 "base_bdevs_list": [ 00:09:23.656 { 00:09:23.656 "name": "BaseBdev1", 00:09:23.656 "uuid": "5a9f7005-ab72-591d-a1e7-2fcdc64e5f11", 00:09:23.656 "is_configured": true, 00:09:23.656 "data_offset": 2048, 00:09:23.656 "data_size": 63488 
00:09:23.656 }, 00:09:23.656 { 00:09:23.656 "name": "BaseBdev2", 00:09:23.656 "uuid": "03d09f9c-2fea-517b-a15c-007f75806716", 00:09:23.656 "is_configured": true, 00:09:23.656 "data_offset": 2048, 00:09:23.656 "data_size": 63488 00:09:23.656 }, 00:09:23.656 { 00:09:23.656 "name": "BaseBdev3", 00:09:23.656 "uuid": "36cdba5f-8dab-5850-aff1-66b46dd70ab7", 00:09:23.656 "is_configured": true, 00:09:23.656 "data_offset": 2048, 00:09:23.656 "data_size": 63488 00:09:23.656 } 00:09:23.656 ] 00:09:23.656 }' 00:09:23.656 04:07:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.656 04:07:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.288 04:07:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:24.288 04:07:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.288 04:07:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.288 [2024-11-21 04:07:23.961404] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:24.288 [2024-11-21 04:07:23.961443] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:24.288 [2024-11-21 04:07:23.964164] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:24.288 [2024-11-21 04:07:23.964228] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:24.288 [2024-11-21 04:07:23.964271] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:24.288 [2024-11-21 04:07:23.964284] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:09:24.288 { 00:09:24.288 "results": [ 00:09:24.288 { 00:09:24.288 "job": "raid_bdev1", 00:09:24.288 "core_mask": "0x1", 00:09:24.288 "workload": "randrw", 00:09:24.288 "percentage": 50, 
00:09:24.288 "status": "finished", 00:09:24.288 "queue_depth": 1, 00:09:24.288 "io_size": 131072, 00:09:24.288 "runtime": 1.414147, 00:09:24.288 "iops": 14275.036470748797, 00:09:24.288 "mibps": 1784.3795588435996, 00:09:24.288 "io_failed": 1, 00:09:24.288 "io_timeout": 0, 00:09:24.288 "avg_latency_us": 98.29187471825972, 00:09:24.288 "min_latency_us": 26.606113537117903, 00:09:24.288 "max_latency_us": 1380.8349344978167 00:09:24.288 } 00:09:24.288 ], 00:09:24.288 "core_count": 1 00:09:24.288 } 00:09:24.288 04:07:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.288 04:07:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 78208 00:09:24.288 04:07:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 78208 ']' 00:09:24.288 04:07:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 78208 00:09:24.288 04:07:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:24.288 04:07:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:24.288 04:07:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78208 00:09:24.288 04:07:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:24.288 04:07:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:24.288 killing process with pid 78208 00:09:24.288 04:07:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78208' 00:09:24.288 04:07:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 78208 00:09:24.288 [2024-11-21 04:07:24.012714] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:24.288 04:07:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 78208 00:09:24.288 [2024-11-21 
04:07:24.061963] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:24.548 04:07:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.WKU0zds2RO 00:09:24.548 04:07:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:24.548 04:07:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:24.548 04:07:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:09:24.548 04:07:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:24.548 04:07:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:24.548 04:07:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:24.548 04:07:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:09:24.548 00:09:24.548 real 0m3.446s 00:09:24.548 user 0m4.227s 00:09:24.548 sys 0m0.643s 00:09:24.548 04:07:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:24.548 ************************************ 00:09:24.548 END TEST raid_read_error_test 00:09:24.548 ************************************ 00:09:24.548 04:07:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.548 04:07:24 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:09:24.548 04:07:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:24.548 04:07:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:24.548 04:07:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:24.548 ************************************ 00:09:24.548 START TEST raid_write_error_test 00:09:24.548 ************************************ 00:09:24.548 04:07:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write 00:09:24.548 04:07:24 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:24.548 04:07:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:24.548 04:07:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:24.548 04:07:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:24.548 04:07:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:24.548 04:07:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:24.548 04:07:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:24.548 04:07:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:24.549 04:07:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:24.549 04:07:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:24.549 04:07:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:24.549 04:07:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:24.549 04:07:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:24.549 04:07:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:24.549 04:07:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:24.549 04:07:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:24.549 04:07:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:24.549 04:07:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:24.549 04:07:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:24.549 04:07:24 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:24.549 04:07:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:24.549 04:07:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:24.549 04:07:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:24.549 04:07:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:24.549 04:07:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:24.549 04:07:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.i5FB7nLFgS 00:09:24.549 04:07:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=78338 00:09:24.549 04:07:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 78338 00:09:24.549 04:07:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 78338 ']' 00:09:24.549 04:07:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:24.549 04:07:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:24.549 04:07:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:24.549 04:07:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:24.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:24.549 04:07:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:24.549 04:07:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.808 [2024-11-21 04:07:24.576265] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:09:24.808 [2024-11-21 04:07:24.576395] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78338 ] 00:09:24.808 [2024-11-21 04:07:24.731063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.808 [2024-11-21 04:07:24.773138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.067 [2024-11-21 04:07:24.853143] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:25.067 [2024-11-21 04:07:24.853190] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:25.635 04:07:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:25.635 04:07:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:25.635 04:07:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:25.635 04:07:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:25.635 04:07:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.635 04:07:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.635 BaseBdev1_malloc 00:09:25.635 04:07:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.635 04:07:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:25.635 04:07:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.635 04:07:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.635 true 00:09:25.635 04:07:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.635 04:07:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:25.635 04:07:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.635 04:07:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.636 [2024-11-21 04:07:25.432989] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:25.636 [2024-11-21 04:07:25.433069] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:25.636 [2024-11-21 04:07:25.433097] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:09:25.636 [2024-11-21 04:07:25.433107] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:25.636 [2024-11-21 04:07:25.435703] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:25.636 [2024-11-21 04:07:25.435799] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:25.636 BaseBdev1 00:09:25.636 04:07:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.636 04:07:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:25.636 04:07:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:25.636 04:07:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.636 04:07:25 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:25.636 BaseBdev2_malloc 00:09:25.636 04:07:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.636 04:07:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:25.636 04:07:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.636 04:07:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.636 true 00:09:25.636 04:07:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.636 04:07:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:25.636 04:07:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.636 04:07:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.636 [2024-11-21 04:07:25.480087] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:25.636 [2024-11-21 04:07:25.480137] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:25.636 [2024-11-21 04:07:25.480156] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:09:25.636 [2024-11-21 04:07:25.480174] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:25.636 [2024-11-21 04:07:25.482771] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:25.636 [2024-11-21 04:07:25.482811] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:25.636 BaseBdev2 00:09:25.636 04:07:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.636 04:07:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:25.636 04:07:25 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:25.636 04:07:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.636 04:07:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.636 BaseBdev3_malloc 00:09:25.636 04:07:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.636 04:07:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:25.636 04:07:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.636 04:07:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.636 true 00:09:25.636 04:07:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.636 04:07:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:25.636 04:07:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.636 04:07:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.636 [2024-11-21 04:07:25.527358] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:25.636 [2024-11-21 04:07:25.527470] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:25.636 [2024-11-21 04:07:25.527499] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:09:25.636 [2024-11-21 04:07:25.527509] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:25.636 [2024-11-21 04:07:25.529969] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:25.636 [2024-11-21 04:07:25.530005] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:25.636 BaseBdev3 00:09:25.636 04:07:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.636 04:07:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:25.636 04:07:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.636 04:07:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.636 [2024-11-21 04:07:25.539438] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:25.636 [2024-11-21 04:07:25.541661] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:25.636 [2024-11-21 04:07:25.541739] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:25.636 [2024-11-21 04:07:25.541929] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:09:25.636 [2024-11-21 04:07:25.541953] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:25.636 [2024-11-21 04:07:25.542221] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002bb0 00:09:25.636 [2024-11-21 04:07:25.542382] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:09:25.636 [2024-11-21 04:07:25.542392] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:09:25.636 [2024-11-21 04:07:25.542509] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:25.636 04:07:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.636 04:07:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:25.636 04:07:25 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:25.636 04:07:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:25.636 04:07:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:25.636 04:07:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:25.636 04:07:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:25.636 04:07:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.636 04:07:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.636 04:07:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:25.636 04:07:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.636 04:07:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.636 04:07:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.636 04:07:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.636 04:07:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:25.636 04:07:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.636 04:07:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.636 "name": "raid_bdev1", 00:09:25.636 "uuid": "ecfadf37-49d8-4d1d-b38a-5606cd2bcc79", 00:09:25.636 "strip_size_kb": 64, 00:09:25.636 "state": "online", 00:09:25.636 "raid_level": "concat", 00:09:25.636 "superblock": true, 00:09:25.636 "num_base_bdevs": 3, 00:09:25.636 "num_base_bdevs_discovered": 3, 00:09:25.636 "num_base_bdevs_operational": 3, 00:09:25.636 "base_bdevs_list": [ 00:09:25.636 { 00:09:25.636 
"name": "BaseBdev1", 00:09:25.636 "uuid": "f1d4d1d8-463c-5974-9bb9-232b50710a1b", 00:09:25.636 "is_configured": true, 00:09:25.636 "data_offset": 2048, 00:09:25.636 "data_size": 63488 00:09:25.636 }, 00:09:25.636 { 00:09:25.636 "name": "BaseBdev2", 00:09:25.636 "uuid": "1135aef7-e483-5727-a2d4-cf95a0bbc318", 00:09:25.636 "is_configured": true, 00:09:25.636 "data_offset": 2048, 00:09:25.636 "data_size": 63488 00:09:25.636 }, 00:09:25.636 { 00:09:25.637 "name": "BaseBdev3", 00:09:25.637 "uuid": "c48e4b6f-8480-5db1-b5fa-5c6f48961a02", 00:09:25.637 "is_configured": true, 00:09:25.637 "data_offset": 2048, 00:09:25.637 "data_size": 63488 00:09:25.637 } 00:09:25.637 ] 00:09:25.637 }' 00:09:25.637 04:07:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.637 04:07:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.204 04:07:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:26.204 04:07:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:26.204 [2024-11-21 04:07:26.103101] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002d50 00:09:27.141 04:07:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:27.141 04:07:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.141 04:07:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.141 04:07:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.141 04:07:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:27.141 04:07:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:27.141 04:07:27 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:27.141 04:07:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:27.141 04:07:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:27.141 04:07:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:27.141 04:07:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:27.141 04:07:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:27.141 04:07:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:27.141 04:07:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.141 04:07:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.141 04:07:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:27.141 04:07:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.141 04:07:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.141 04:07:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:27.141 04:07:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.141 04:07:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.141 04:07:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.141 04:07:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.141 "name": "raid_bdev1", 00:09:27.141 "uuid": "ecfadf37-49d8-4d1d-b38a-5606cd2bcc79", 00:09:27.141 "strip_size_kb": 64, 00:09:27.141 "state": "online", 
00:09:27.142 "raid_level": "concat", 00:09:27.142 "superblock": true, 00:09:27.142 "num_base_bdevs": 3, 00:09:27.142 "num_base_bdevs_discovered": 3, 00:09:27.142 "num_base_bdevs_operational": 3, 00:09:27.142 "base_bdevs_list": [ 00:09:27.142 { 00:09:27.142 "name": "BaseBdev1", 00:09:27.142 "uuid": "f1d4d1d8-463c-5974-9bb9-232b50710a1b", 00:09:27.142 "is_configured": true, 00:09:27.142 "data_offset": 2048, 00:09:27.142 "data_size": 63488 00:09:27.142 }, 00:09:27.142 { 00:09:27.142 "name": "BaseBdev2", 00:09:27.142 "uuid": "1135aef7-e483-5727-a2d4-cf95a0bbc318", 00:09:27.142 "is_configured": true, 00:09:27.142 "data_offset": 2048, 00:09:27.142 "data_size": 63488 00:09:27.142 }, 00:09:27.142 { 00:09:27.142 "name": "BaseBdev3", 00:09:27.142 "uuid": "c48e4b6f-8480-5db1-b5fa-5c6f48961a02", 00:09:27.142 "is_configured": true, 00:09:27.142 "data_offset": 2048, 00:09:27.142 "data_size": 63488 00:09:27.142 } 00:09:27.142 ] 00:09:27.142 }' 00:09:27.142 04:07:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.142 04:07:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.708 04:07:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:27.708 04:07:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.708 04:07:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.708 [2024-11-21 04:07:27.463767] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:27.708 [2024-11-21 04:07:27.463872] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:27.708 [2024-11-21 04:07:27.466631] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:27.708 [2024-11-21 04:07:27.466757] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:27.708 [2024-11-21 04:07:27.466849] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:27.708 [2024-11-21 04:07:27.466943] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:09:27.708 { 00:09:27.708 "results": [ 00:09:27.708 { 00:09:27.708 "job": "raid_bdev1", 00:09:27.708 "core_mask": "0x1", 00:09:27.708 "workload": "randrw", 00:09:27.708 "percentage": 50, 00:09:27.708 "status": "finished", 00:09:27.708 "queue_depth": 1, 00:09:27.708 "io_size": 131072, 00:09:27.708 "runtime": 1.361213, 00:09:27.708 "iops": 14335.743193754393, 00:09:27.708 "mibps": 1791.9678992192992, 00:09:27.708 "io_failed": 1, 00:09:27.708 "io_timeout": 0, 00:09:27.708 "avg_latency_us": 97.86069423699382, 00:09:27.708 "min_latency_us": 26.1589519650655, 00:09:27.708 "max_latency_us": 1402.2986899563318 00:09:27.708 } 00:09:27.708 ], 00:09:27.708 "core_count": 1 00:09:27.708 } 00:09:27.708 04:07:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.708 04:07:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 78338 00:09:27.708 04:07:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 78338 ']' 00:09:27.708 04:07:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 78338 00:09:27.708 04:07:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:27.708 04:07:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:27.708 04:07:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78338 00:09:27.708 killing process with pid 78338 00:09:27.708 04:07:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:27.708 04:07:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:27.708 04:07:27 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78338' 00:09:27.708 04:07:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 78338 00:09:27.708 [2024-11-21 04:07:27.513347] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:27.708 04:07:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 78338 00:09:27.708 [2024-11-21 04:07:27.563149] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:27.968 04:07:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.i5FB7nLFgS 00:09:27.968 04:07:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:27.968 04:07:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:27.968 ************************************ 00:09:27.968 END TEST raid_write_error_test 00:09:27.968 ************************************ 00:09:27.968 04:07:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:09:27.968 04:07:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:27.968 04:07:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:27.968 04:07:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:27.968 04:07:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:09:27.968 00:09:27.968 real 0m3.426s 00:09:27.968 user 0m4.248s 00:09:27.968 sys 0m0.597s 00:09:27.968 04:07:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:27.968 04:07:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.227 04:07:27 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:28.227 04:07:27 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:09:28.227 04:07:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:28.227 04:07:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:28.227 04:07:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:28.227 ************************************ 00:09:28.227 START TEST raid_state_function_test 00:09:28.227 ************************************ 00:09:28.227 04:07:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 00:09:28.227 04:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:28.227 04:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:28.227 04:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:28.227 04:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:28.227 04:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:28.227 04:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:28.227 04:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:28.227 04:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:28.227 04:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:28.227 04:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:28.227 04:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:28.227 04:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:28.227 04:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:28.227 04:07:27 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:28.227 04:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:28.227 04:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:28.227 04:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:28.227 04:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:28.227 04:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:28.227 04:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:28.227 04:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:28.227 04:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:28.227 04:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:28.227 04:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:28.227 04:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:28.227 04:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=78469 00:09:28.227 04:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:28.227 04:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 78469' 00:09:28.227 Process raid pid: 78469 00:09:28.227 04:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 78469 00:09:28.227 04:07:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 78469 ']' 00:09:28.227 04:07:27 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:28.227 04:07:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:28.227 04:07:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:28.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:28.227 04:07:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:28.227 04:07:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.227 [2024-11-21 04:07:28.068684] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:09:28.227 [2024-11-21 04:07:28.068898] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:28.486 [2024-11-21 04:07:28.227025] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.486 [2024-11-21 04:07:28.265784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.486 [2024-11-21 04:07:28.342591] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:28.486 [2024-11-21 04:07:28.342736] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:29.054 04:07:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:29.054 04:07:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:29.054 04:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:29.054 04:07:28 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.054 04:07:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.054 [2024-11-21 04:07:28.894342] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:29.054 [2024-11-21 04:07:28.894411] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:29.054 [2024-11-21 04:07:28.894422] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:29.054 [2024-11-21 04:07:28.894432] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:29.054 [2024-11-21 04:07:28.894438] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:29.054 [2024-11-21 04:07:28.894451] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:29.054 04:07:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.054 04:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:29.054 04:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:29.054 04:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:29.054 04:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:29.054 04:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:29.054 04:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:29.054 04:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.054 04:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.054 
04:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.054 04:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.054 04:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:29.054 04:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.054 04:07:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.054 04:07:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.054 04:07:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.054 04:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.054 "name": "Existed_Raid", 00:09:29.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.054 "strip_size_kb": 0, 00:09:29.054 "state": "configuring", 00:09:29.054 "raid_level": "raid1", 00:09:29.054 "superblock": false, 00:09:29.054 "num_base_bdevs": 3, 00:09:29.054 "num_base_bdevs_discovered": 0, 00:09:29.054 "num_base_bdevs_operational": 3, 00:09:29.054 "base_bdevs_list": [ 00:09:29.054 { 00:09:29.054 "name": "BaseBdev1", 00:09:29.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.054 "is_configured": false, 00:09:29.054 "data_offset": 0, 00:09:29.054 "data_size": 0 00:09:29.054 }, 00:09:29.054 { 00:09:29.054 "name": "BaseBdev2", 00:09:29.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.054 "is_configured": false, 00:09:29.054 "data_offset": 0, 00:09:29.054 "data_size": 0 00:09:29.054 }, 00:09:29.054 { 00:09:29.054 "name": "BaseBdev3", 00:09:29.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.054 "is_configured": false, 00:09:29.054 "data_offset": 0, 00:09:29.054 "data_size": 0 00:09:29.054 } 00:09:29.054 ] 00:09:29.054 }' 00:09:29.054 04:07:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.054 04:07:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.621 04:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:29.621 04:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.621 04:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.621 [2024-11-21 04:07:29.297596] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:29.621 [2024-11-21 04:07:29.297705] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:09:29.621 04:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.621 04:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:29.621 04:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.621 04:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.621 [2024-11-21 04:07:29.309564] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:29.621 [2024-11-21 04:07:29.309608] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:29.621 [2024-11-21 04:07:29.309617] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:29.621 [2024-11-21 04:07:29.309627] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:29.621 [2024-11-21 04:07:29.309632] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:29.621 [2024-11-21 04:07:29.309641] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:29.621 04:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.621 04:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:29.621 04:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.621 04:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.621 [2024-11-21 04:07:29.336521] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:29.621 BaseBdev1 00:09:29.622 04:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.622 04:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:29.622 04:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:29.622 04:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:29.622 04:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:29.622 04:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:29.622 04:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:29.622 04:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:29.622 04:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.622 04:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.622 04:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.622 04:07:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:29.622 04:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.622 04:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.622 [ 00:09:29.622 { 00:09:29.622 "name": "BaseBdev1", 00:09:29.622 "aliases": [ 00:09:29.622 "7d49e4f1-68ec-4f38-ad44-7af10772aeed" 00:09:29.622 ], 00:09:29.622 "product_name": "Malloc disk", 00:09:29.622 "block_size": 512, 00:09:29.622 "num_blocks": 65536, 00:09:29.622 "uuid": "7d49e4f1-68ec-4f38-ad44-7af10772aeed", 00:09:29.622 "assigned_rate_limits": { 00:09:29.622 "rw_ios_per_sec": 0, 00:09:29.622 "rw_mbytes_per_sec": 0, 00:09:29.622 "r_mbytes_per_sec": 0, 00:09:29.622 "w_mbytes_per_sec": 0 00:09:29.622 }, 00:09:29.622 "claimed": true, 00:09:29.622 "claim_type": "exclusive_write", 00:09:29.622 "zoned": false, 00:09:29.622 "supported_io_types": { 00:09:29.622 "read": true, 00:09:29.622 "write": true, 00:09:29.622 "unmap": true, 00:09:29.622 "flush": true, 00:09:29.622 "reset": true, 00:09:29.622 "nvme_admin": false, 00:09:29.622 "nvme_io": false, 00:09:29.622 "nvme_io_md": false, 00:09:29.622 "write_zeroes": true, 00:09:29.622 "zcopy": true, 00:09:29.622 "get_zone_info": false, 00:09:29.622 "zone_management": false, 00:09:29.622 "zone_append": false, 00:09:29.622 "compare": false, 00:09:29.622 "compare_and_write": false, 00:09:29.622 "abort": true, 00:09:29.622 "seek_hole": false, 00:09:29.622 "seek_data": false, 00:09:29.622 "copy": true, 00:09:29.622 "nvme_iov_md": false 00:09:29.622 }, 00:09:29.622 "memory_domains": [ 00:09:29.622 { 00:09:29.622 "dma_device_id": "system", 00:09:29.622 "dma_device_type": 1 00:09:29.622 }, 00:09:29.622 { 00:09:29.622 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.622 "dma_device_type": 2 00:09:29.622 } 00:09:29.622 ], 00:09:29.622 "driver_specific": {} 00:09:29.622 } 00:09:29.622 ] 00:09:29.622 04:07:29 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.622 04:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:29.622 04:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:29.622 04:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:29.622 04:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:29.622 04:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:29.622 04:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:29.622 04:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:29.622 04:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.622 04:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.622 04:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.622 04:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.622 04:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:29.622 04:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.622 04:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.622 04:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.622 04:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.622 04:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:09:29.622 "name": "Existed_Raid", 00:09:29.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.622 "strip_size_kb": 0, 00:09:29.622 "state": "configuring", 00:09:29.622 "raid_level": "raid1", 00:09:29.622 "superblock": false, 00:09:29.622 "num_base_bdevs": 3, 00:09:29.622 "num_base_bdevs_discovered": 1, 00:09:29.622 "num_base_bdevs_operational": 3, 00:09:29.622 "base_bdevs_list": [ 00:09:29.622 { 00:09:29.622 "name": "BaseBdev1", 00:09:29.622 "uuid": "7d49e4f1-68ec-4f38-ad44-7af10772aeed", 00:09:29.622 "is_configured": true, 00:09:29.622 "data_offset": 0, 00:09:29.622 "data_size": 65536 00:09:29.622 }, 00:09:29.622 { 00:09:29.622 "name": "BaseBdev2", 00:09:29.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.622 "is_configured": false, 00:09:29.622 "data_offset": 0, 00:09:29.622 "data_size": 0 00:09:29.622 }, 00:09:29.622 { 00:09:29.622 "name": "BaseBdev3", 00:09:29.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.622 "is_configured": false, 00:09:29.622 "data_offset": 0, 00:09:29.622 "data_size": 0 00:09:29.622 } 00:09:29.622 ] 00:09:29.622 }' 00:09:29.622 04:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.622 04:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.880 04:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:29.880 04:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.880 04:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.880 [2024-11-21 04:07:29.807736] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:29.880 [2024-11-21 04:07:29.807782] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:09:29.880 04:07:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.880 04:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:29.880 04:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.880 04:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.880 [2024-11-21 04:07:29.819761] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:29.880 [2024-11-21 04:07:29.822081] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:29.880 [2024-11-21 04:07:29.822178] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:29.880 [2024-11-21 04:07:29.822241] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:29.880 [2024-11-21 04:07:29.822294] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:29.880 04:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.880 04:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:29.880 04:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:29.880 04:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:29.880 04:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:29.880 04:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:29.880 04:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:29.880 04:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:09:29.880 04:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:29.880 04:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.880 04:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.880 04:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.880 04:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.880 04:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.881 04:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.881 04:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.881 04:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:29.881 04:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.140 04:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.140 "name": "Existed_Raid", 00:09:30.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.140 "strip_size_kb": 0, 00:09:30.140 "state": "configuring", 00:09:30.140 "raid_level": "raid1", 00:09:30.140 "superblock": false, 00:09:30.140 "num_base_bdevs": 3, 00:09:30.140 "num_base_bdevs_discovered": 1, 00:09:30.140 "num_base_bdevs_operational": 3, 00:09:30.140 "base_bdevs_list": [ 00:09:30.140 { 00:09:30.140 "name": "BaseBdev1", 00:09:30.140 "uuid": "7d49e4f1-68ec-4f38-ad44-7af10772aeed", 00:09:30.140 "is_configured": true, 00:09:30.140 "data_offset": 0, 00:09:30.140 "data_size": 65536 00:09:30.140 }, 00:09:30.140 { 00:09:30.140 "name": "BaseBdev2", 00:09:30.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.140 
"is_configured": false, 00:09:30.140 "data_offset": 0, 00:09:30.140 "data_size": 0 00:09:30.140 }, 00:09:30.140 { 00:09:30.140 "name": "BaseBdev3", 00:09:30.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.140 "is_configured": false, 00:09:30.140 "data_offset": 0, 00:09:30.140 "data_size": 0 00:09:30.140 } 00:09:30.140 ] 00:09:30.140 }' 00:09:30.140 04:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.140 04:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.399 04:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:30.399 04:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.399 04:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.399 [2024-11-21 04:07:30.267851] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:30.399 BaseBdev2 00:09:30.399 04:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.400 04:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:30.400 04:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:30.400 04:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:30.400 04:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:30.400 04:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:30.400 04:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:30.400 04:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:30.400 04:07:30 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.400 04:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.400 04:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.400 04:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:30.400 04:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.400 04:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.400 [ 00:09:30.400 { 00:09:30.400 "name": "BaseBdev2", 00:09:30.400 "aliases": [ 00:09:30.400 "41827f5d-e38f-43bd-a1ca-cb273e6c5e9b" 00:09:30.400 ], 00:09:30.400 "product_name": "Malloc disk", 00:09:30.400 "block_size": 512, 00:09:30.400 "num_blocks": 65536, 00:09:30.400 "uuid": "41827f5d-e38f-43bd-a1ca-cb273e6c5e9b", 00:09:30.400 "assigned_rate_limits": { 00:09:30.400 "rw_ios_per_sec": 0, 00:09:30.400 "rw_mbytes_per_sec": 0, 00:09:30.400 "r_mbytes_per_sec": 0, 00:09:30.400 "w_mbytes_per_sec": 0 00:09:30.400 }, 00:09:30.400 "claimed": true, 00:09:30.400 "claim_type": "exclusive_write", 00:09:30.400 "zoned": false, 00:09:30.400 "supported_io_types": { 00:09:30.400 "read": true, 00:09:30.400 "write": true, 00:09:30.400 "unmap": true, 00:09:30.400 "flush": true, 00:09:30.400 "reset": true, 00:09:30.400 "nvme_admin": false, 00:09:30.400 "nvme_io": false, 00:09:30.400 "nvme_io_md": false, 00:09:30.400 "write_zeroes": true, 00:09:30.400 "zcopy": true, 00:09:30.400 "get_zone_info": false, 00:09:30.400 "zone_management": false, 00:09:30.400 "zone_append": false, 00:09:30.400 "compare": false, 00:09:30.400 "compare_and_write": false, 00:09:30.400 "abort": true, 00:09:30.400 "seek_hole": false, 00:09:30.400 "seek_data": false, 00:09:30.400 "copy": true, 00:09:30.400 "nvme_iov_md": false 00:09:30.400 }, 00:09:30.400 
"memory_domains": [ 00:09:30.400 { 00:09:30.400 "dma_device_id": "system", 00:09:30.400 "dma_device_type": 1 00:09:30.400 }, 00:09:30.400 { 00:09:30.400 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.400 "dma_device_type": 2 00:09:30.400 } 00:09:30.400 ], 00:09:30.400 "driver_specific": {} 00:09:30.400 } 00:09:30.400 ] 00:09:30.400 04:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.400 04:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:30.400 04:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:30.400 04:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:30.400 04:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:30.400 04:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:30.400 04:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:30.400 04:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:30.400 04:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:30.400 04:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:30.400 04:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.400 04:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.400 04:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.400 04:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.400 04:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:30.400 04:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.400 04:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.400 04:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:30.400 04:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.400 04:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.400 "name": "Existed_Raid", 00:09:30.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.400 "strip_size_kb": 0, 00:09:30.400 "state": "configuring", 00:09:30.400 "raid_level": "raid1", 00:09:30.400 "superblock": false, 00:09:30.400 "num_base_bdevs": 3, 00:09:30.400 "num_base_bdevs_discovered": 2, 00:09:30.400 "num_base_bdevs_operational": 3, 00:09:30.400 "base_bdevs_list": [ 00:09:30.400 { 00:09:30.400 "name": "BaseBdev1", 00:09:30.400 "uuid": "7d49e4f1-68ec-4f38-ad44-7af10772aeed", 00:09:30.400 "is_configured": true, 00:09:30.400 "data_offset": 0, 00:09:30.400 "data_size": 65536 00:09:30.400 }, 00:09:30.400 { 00:09:30.400 "name": "BaseBdev2", 00:09:30.400 "uuid": "41827f5d-e38f-43bd-a1ca-cb273e6c5e9b", 00:09:30.400 "is_configured": true, 00:09:30.400 "data_offset": 0, 00:09:30.400 "data_size": 65536 00:09:30.400 }, 00:09:30.400 { 00:09:30.400 "name": "BaseBdev3", 00:09:30.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.400 "is_configured": false, 00:09:30.400 "data_offset": 0, 00:09:30.400 "data_size": 0 00:09:30.400 } 00:09:30.400 ] 00:09:30.400 }' 00:09:30.400 04:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.400 04:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.969 04:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:09:30.969 04:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.969 04:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.969 [2024-11-21 04:07:30.770429] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:30.969 [2024-11-21 04:07:30.770479] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:09:30.969 [2024-11-21 04:07:30.770505] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:30.969 [2024-11-21 04:07:30.770853] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:09:30.969 [2024-11-21 04:07:30.771026] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:09:30.969 [2024-11-21 04:07:30.771037] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:09:30.969 [2024-11-21 04:07:30.771314] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:30.969 BaseBdev3 00:09:30.969 04:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.969 04:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:30.969 04:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:30.969 04:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:30.969 04:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:30.969 04:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:30.969 04:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:30.969 04:07:30 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:30.969 04:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.969 04:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.969 04:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.969 04:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:30.969 04:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.969 04:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.969 [ 00:09:30.969 { 00:09:30.969 "name": "BaseBdev3", 00:09:30.969 "aliases": [ 00:09:30.969 "69b6b3de-9c8c-438a-b4aa-acf4c5a7c5da" 00:09:30.969 ], 00:09:30.969 "product_name": "Malloc disk", 00:09:30.969 "block_size": 512, 00:09:30.969 "num_blocks": 65536, 00:09:30.969 "uuid": "69b6b3de-9c8c-438a-b4aa-acf4c5a7c5da", 00:09:30.969 "assigned_rate_limits": { 00:09:30.969 "rw_ios_per_sec": 0, 00:09:30.969 "rw_mbytes_per_sec": 0, 00:09:30.969 "r_mbytes_per_sec": 0, 00:09:30.969 "w_mbytes_per_sec": 0 00:09:30.969 }, 00:09:30.969 "claimed": true, 00:09:30.969 "claim_type": "exclusive_write", 00:09:30.969 "zoned": false, 00:09:30.969 "supported_io_types": { 00:09:30.969 "read": true, 00:09:30.969 "write": true, 00:09:30.969 "unmap": true, 00:09:30.969 "flush": true, 00:09:30.969 "reset": true, 00:09:30.969 "nvme_admin": false, 00:09:30.969 "nvme_io": false, 00:09:30.969 "nvme_io_md": false, 00:09:30.969 "write_zeroes": true, 00:09:30.969 "zcopy": true, 00:09:30.969 "get_zone_info": false, 00:09:30.969 "zone_management": false, 00:09:30.969 "zone_append": false, 00:09:30.969 "compare": false, 00:09:30.969 "compare_and_write": false, 00:09:30.969 "abort": true, 00:09:30.969 "seek_hole": false, 00:09:30.969 "seek_data": false, 00:09:30.969 
"copy": true, 00:09:30.969 "nvme_iov_md": false 00:09:30.969 }, 00:09:30.969 "memory_domains": [ 00:09:30.969 { 00:09:30.969 "dma_device_id": "system", 00:09:30.969 "dma_device_type": 1 00:09:30.969 }, 00:09:30.969 { 00:09:30.969 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.969 "dma_device_type": 2 00:09:30.969 } 00:09:30.969 ], 00:09:30.969 "driver_specific": {} 00:09:30.969 } 00:09:30.969 ] 00:09:30.969 04:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.970 04:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:30.970 04:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:30.970 04:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:30.970 04:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:30.970 04:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:30.970 04:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:30.970 04:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:30.970 04:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:30.970 04:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:30.970 04:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.970 04:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.970 04:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.970 04:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.970 04:07:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.970 04:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:30.970 04:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.970 04:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.970 04:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.970 04:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.970 "name": "Existed_Raid", 00:09:30.970 "uuid": "e3f04281-b1d9-4e1e-98e3-9e470245e719", 00:09:30.970 "strip_size_kb": 0, 00:09:30.970 "state": "online", 00:09:30.970 "raid_level": "raid1", 00:09:30.970 "superblock": false, 00:09:30.970 "num_base_bdevs": 3, 00:09:30.970 "num_base_bdevs_discovered": 3, 00:09:30.970 "num_base_bdevs_operational": 3, 00:09:30.970 "base_bdevs_list": [ 00:09:30.970 { 00:09:30.970 "name": "BaseBdev1", 00:09:30.970 "uuid": "7d49e4f1-68ec-4f38-ad44-7af10772aeed", 00:09:30.970 "is_configured": true, 00:09:30.970 "data_offset": 0, 00:09:30.970 "data_size": 65536 00:09:30.970 }, 00:09:30.970 { 00:09:30.970 "name": "BaseBdev2", 00:09:30.970 "uuid": "41827f5d-e38f-43bd-a1ca-cb273e6c5e9b", 00:09:30.970 "is_configured": true, 00:09:30.970 "data_offset": 0, 00:09:30.970 "data_size": 65536 00:09:30.970 }, 00:09:30.970 { 00:09:30.970 "name": "BaseBdev3", 00:09:30.970 "uuid": "69b6b3de-9c8c-438a-b4aa-acf4c5a7c5da", 00:09:30.970 "is_configured": true, 00:09:30.970 "data_offset": 0, 00:09:30.970 "data_size": 65536 00:09:30.970 } 00:09:30.970 ] 00:09:30.970 }' 00:09:30.970 04:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.970 04:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.538 04:07:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:31.538 04:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:31.538 04:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:31.538 04:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:31.538 04:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:31.538 04:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:31.538 04:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:31.538 04:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:31.538 04:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.538 04:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.538 [2024-11-21 04:07:31.281906] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:31.538 04:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.538 04:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:31.538 "name": "Existed_Raid", 00:09:31.538 "aliases": [ 00:09:31.538 "e3f04281-b1d9-4e1e-98e3-9e470245e719" 00:09:31.538 ], 00:09:31.538 "product_name": "Raid Volume", 00:09:31.538 "block_size": 512, 00:09:31.538 "num_blocks": 65536, 00:09:31.538 "uuid": "e3f04281-b1d9-4e1e-98e3-9e470245e719", 00:09:31.538 "assigned_rate_limits": { 00:09:31.538 "rw_ios_per_sec": 0, 00:09:31.538 "rw_mbytes_per_sec": 0, 00:09:31.538 "r_mbytes_per_sec": 0, 00:09:31.538 "w_mbytes_per_sec": 0 00:09:31.538 }, 00:09:31.538 "claimed": false, 00:09:31.538 "zoned": false, 
00:09:31.538 "supported_io_types": { 00:09:31.538 "read": true, 00:09:31.538 "write": true, 00:09:31.538 "unmap": false, 00:09:31.538 "flush": false, 00:09:31.538 "reset": true, 00:09:31.538 "nvme_admin": false, 00:09:31.538 "nvme_io": false, 00:09:31.538 "nvme_io_md": false, 00:09:31.538 "write_zeroes": true, 00:09:31.538 "zcopy": false, 00:09:31.538 "get_zone_info": false, 00:09:31.538 "zone_management": false, 00:09:31.538 "zone_append": false, 00:09:31.538 "compare": false, 00:09:31.538 "compare_and_write": false, 00:09:31.538 "abort": false, 00:09:31.538 "seek_hole": false, 00:09:31.538 "seek_data": false, 00:09:31.538 "copy": false, 00:09:31.538 "nvme_iov_md": false 00:09:31.538 }, 00:09:31.538 "memory_domains": [ 00:09:31.538 { 00:09:31.538 "dma_device_id": "system", 00:09:31.538 "dma_device_type": 1 00:09:31.538 }, 00:09:31.538 { 00:09:31.538 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.538 "dma_device_type": 2 00:09:31.538 }, 00:09:31.538 { 00:09:31.538 "dma_device_id": "system", 00:09:31.538 "dma_device_type": 1 00:09:31.538 }, 00:09:31.538 { 00:09:31.538 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.538 "dma_device_type": 2 00:09:31.538 }, 00:09:31.538 { 00:09:31.538 "dma_device_id": "system", 00:09:31.538 "dma_device_type": 1 00:09:31.538 }, 00:09:31.538 { 00:09:31.538 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.538 "dma_device_type": 2 00:09:31.538 } 00:09:31.538 ], 00:09:31.538 "driver_specific": { 00:09:31.538 "raid": { 00:09:31.538 "uuid": "e3f04281-b1d9-4e1e-98e3-9e470245e719", 00:09:31.538 "strip_size_kb": 0, 00:09:31.538 "state": "online", 00:09:31.538 "raid_level": "raid1", 00:09:31.538 "superblock": false, 00:09:31.538 "num_base_bdevs": 3, 00:09:31.538 "num_base_bdevs_discovered": 3, 00:09:31.538 "num_base_bdevs_operational": 3, 00:09:31.538 "base_bdevs_list": [ 00:09:31.538 { 00:09:31.538 "name": "BaseBdev1", 00:09:31.538 "uuid": "7d49e4f1-68ec-4f38-ad44-7af10772aeed", 00:09:31.538 "is_configured": true, 00:09:31.538 
"data_offset": 0, 00:09:31.538 "data_size": 65536 00:09:31.538 }, 00:09:31.538 { 00:09:31.538 "name": "BaseBdev2", 00:09:31.538 "uuid": "41827f5d-e38f-43bd-a1ca-cb273e6c5e9b", 00:09:31.538 "is_configured": true, 00:09:31.538 "data_offset": 0, 00:09:31.538 "data_size": 65536 00:09:31.538 }, 00:09:31.538 { 00:09:31.538 "name": "BaseBdev3", 00:09:31.538 "uuid": "69b6b3de-9c8c-438a-b4aa-acf4c5a7c5da", 00:09:31.538 "is_configured": true, 00:09:31.538 "data_offset": 0, 00:09:31.538 "data_size": 65536 00:09:31.538 } 00:09:31.538 ] 00:09:31.538 } 00:09:31.538 } 00:09:31.538 }' 00:09:31.538 04:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:31.538 04:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:31.538 BaseBdev2 00:09:31.538 BaseBdev3' 00:09:31.538 04:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:31.538 04:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:31.538 04:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:31.538 04:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:31.538 04:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:31.538 04:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.538 04:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.538 04:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.538 04:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:09:31.538 04:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:31.538 04:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:31.538 04:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:31.538 04:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.538 04:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.538 04:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:31.538 04:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.538 04:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:31.538 04:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:31.538 04:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:31.538 04:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:31.538 04:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:31.538 04:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.538 04:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.538 04:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.797 04:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:31.797 04:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:09:31.797 04:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:31.797 04:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.797 04:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.797 [2024-11-21 04:07:31.529263] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:31.797 04:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.797 04:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:31.797 04:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:31.797 04:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:31.797 04:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:31.797 04:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:31.797 04:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:31.797 04:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:31.797 04:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:31.797 04:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:31.797 04:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:31.797 04:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:31.797 04:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.797 04:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:09:31.797 04:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.797 04:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.797 04:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.797 04:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:31.797 04:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.797 04:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.797 04:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.797 04:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.797 "name": "Existed_Raid", 00:09:31.797 "uuid": "e3f04281-b1d9-4e1e-98e3-9e470245e719", 00:09:31.797 "strip_size_kb": 0, 00:09:31.797 "state": "online", 00:09:31.797 "raid_level": "raid1", 00:09:31.797 "superblock": false, 00:09:31.797 "num_base_bdevs": 3, 00:09:31.797 "num_base_bdevs_discovered": 2, 00:09:31.797 "num_base_bdevs_operational": 2, 00:09:31.797 "base_bdevs_list": [ 00:09:31.797 { 00:09:31.797 "name": null, 00:09:31.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.797 "is_configured": false, 00:09:31.797 "data_offset": 0, 00:09:31.797 "data_size": 65536 00:09:31.797 }, 00:09:31.797 { 00:09:31.797 "name": "BaseBdev2", 00:09:31.797 "uuid": "41827f5d-e38f-43bd-a1ca-cb273e6c5e9b", 00:09:31.797 "is_configured": true, 00:09:31.797 "data_offset": 0, 00:09:31.797 "data_size": 65536 00:09:31.797 }, 00:09:31.797 { 00:09:31.797 "name": "BaseBdev3", 00:09:31.797 "uuid": "69b6b3de-9c8c-438a-b4aa-acf4c5a7c5da", 00:09:31.797 "is_configured": true, 00:09:31.797 "data_offset": 0, 00:09:31.797 "data_size": 65536 00:09:31.797 } 00:09:31.797 ] 
00:09:31.797 }' 00:09:31.797 04:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.797 04:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.367 04:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:32.367 04:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:32.367 04:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:32.367 04:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.367 04:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.367 04:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.367 04:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.367 04:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:32.367 04:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:32.367 04:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:32.367 04:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.367 04:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.367 [2024-11-21 04:07:32.089405] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:32.367 04:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.367 04:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:32.367 04:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:32.367 04:07:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.367 04:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:32.367 04:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.367 04:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.367 04:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.367 04:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:32.367 04:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:32.367 04:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:32.367 04:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.367 04:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.367 [2024-11-21 04:07:32.170490] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:32.367 [2024-11-21 04:07:32.170597] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:32.367 [2024-11-21 04:07:32.192081] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:32.367 [2024-11-21 04:07:32.192246] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:32.367 [2024-11-21 04:07:32.192345] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:09:32.367 04:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.367 04:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:32.367 04:07:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:32.367 04:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:32.367 04:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.367 04:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.367 04:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.367 04:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.367 04:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:32.367 04:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:32.367 04:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:32.367 04:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:32.367 04:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:32.367 04:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:32.367 04:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.367 04:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.367 BaseBdev2 00:09:32.367 04:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.367 04:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:32.367 04:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:32.367 04:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:32.367 
04:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:32.367 04:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:32.367 04:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:32.367 04:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:32.367 04:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.367 04:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.367 04:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.367 04:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:32.367 04:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.367 04:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.367 [ 00:09:32.367 { 00:09:32.367 "name": "BaseBdev2", 00:09:32.367 "aliases": [ 00:09:32.367 "5a19dfd1-0f59-49a8-bcbb-172152b8ba99" 00:09:32.367 ], 00:09:32.367 "product_name": "Malloc disk", 00:09:32.367 "block_size": 512, 00:09:32.367 "num_blocks": 65536, 00:09:32.367 "uuid": "5a19dfd1-0f59-49a8-bcbb-172152b8ba99", 00:09:32.367 "assigned_rate_limits": { 00:09:32.367 "rw_ios_per_sec": 0, 00:09:32.367 "rw_mbytes_per_sec": 0, 00:09:32.367 "r_mbytes_per_sec": 0, 00:09:32.367 "w_mbytes_per_sec": 0 00:09:32.367 }, 00:09:32.367 "claimed": false, 00:09:32.367 "zoned": false, 00:09:32.367 "supported_io_types": { 00:09:32.367 "read": true, 00:09:32.367 "write": true, 00:09:32.367 "unmap": true, 00:09:32.367 "flush": true, 00:09:32.367 "reset": true, 00:09:32.367 "nvme_admin": false, 00:09:32.367 "nvme_io": false, 00:09:32.367 "nvme_io_md": false, 00:09:32.367 "write_zeroes": true, 
00:09:32.367 "zcopy": true, 00:09:32.367 "get_zone_info": false, 00:09:32.367 "zone_management": false, 00:09:32.367 "zone_append": false, 00:09:32.367 "compare": false, 00:09:32.367 "compare_and_write": false, 00:09:32.367 "abort": true, 00:09:32.367 "seek_hole": false, 00:09:32.367 "seek_data": false, 00:09:32.367 "copy": true, 00:09:32.367 "nvme_iov_md": false 00:09:32.367 }, 00:09:32.367 "memory_domains": [ 00:09:32.367 { 00:09:32.367 "dma_device_id": "system", 00:09:32.367 "dma_device_type": 1 00:09:32.367 }, 00:09:32.367 { 00:09:32.367 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.367 "dma_device_type": 2 00:09:32.367 } 00:09:32.367 ], 00:09:32.367 "driver_specific": {} 00:09:32.367 } 00:09:32.367 ] 00:09:32.367 04:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.367 04:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:32.367 04:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:32.367 04:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:32.367 04:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:32.367 04:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.367 04:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.367 BaseBdev3 00:09:32.367 04:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.368 04:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:32.368 04:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:32.368 04:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:32.368 04:07:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:32.368 04:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:32.368 04:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:32.368 04:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:32.368 04:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.368 04:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.368 04:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.368 04:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:32.368 04:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.368 04:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.627 [ 00:09:32.627 { 00:09:32.627 "name": "BaseBdev3", 00:09:32.627 "aliases": [ 00:09:32.627 "5fd3b558-7ee2-4929-a10a-128d35249989" 00:09:32.627 ], 00:09:32.627 "product_name": "Malloc disk", 00:09:32.627 "block_size": 512, 00:09:32.627 "num_blocks": 65536, 00:09:32.627 "uuid": "5fd3b558-7ee2-4929-a10a-128d35249989", 00:09:32.627 "assigned_rate_limits": { 00:09:32.627 "rw_ios_per_sec": 0, 00:09:32.627 "rw_mbytes_per_sec": 0, 00:09:32.627 "r_mbytes_per_sec": 0, 00:09:32.627 "w_mbytes_per_sec": 0 00:09:32.627 }, 00:09:32.627 "claimed": false, 00:09:32.627 "zoned": false, 00:09:32.627 "supported_io_types": { 00:09:32.627 "read": true, 00:09:32.627 "write": true, 00:09:32.627 "unmap": true, 00:09:32.627 "flush": true, 00:09:32.627 "reset": true, 00:09:32.627 "nvme_admin": false, 00:09:32.627 "nvme_io": false, 00:09:32.627 "nvme_io_md": false, 00:09:32.627 "write_zeroes": true, 
00:09:32.627 "zcopy": true, 00:09:32.627 "get_zone_info": false, 00:09:32.627 "zone_management": false, 00:09:32.627 "zone_append": false, 00:09:32.627 "compare": false, 00:09:32.627 "compare_and_write": false, 00:09:32.627 "abort": true, 00:09:32.627 "seek_hole": false, 00:09:32.627 "seek_data": false, 00:09:32.627 "copy": true, 00:09:32.627 "nvme_iov_md": false 00:09:32.627 }, 00:09:32.627 "memory_domains": [ 00:09:32.627 { 00:09:32.627 "dma_device_id": "system", 00:09:32.627 "dma_device_type": 1 00:09:32.627 }, 00:09:32.627 { 00:09:32.627 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.627 "dma_device_type": 2 00:09:32.627 } 00:09:32.627 ], 00:09:32.627 "driver_specific": {} 00:09:32.627 } 00:09:32.627 ] 00:09:32.627 04:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.627 04:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:32.627 04:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:32.627 04:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:32.627 04:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:32.627 04:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.627 04:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.627 [2024-11-21 04:07:32.367814] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:32.627 [2024-11-21 04:07:32.367924] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:32.627 [2024-11-21 04:07:32.367977] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:32.627 [2024-11-21 04:07:32.370301] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:32.627 04:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.627 04:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:32.627 04:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:32.627 04:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:32.627 04:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:32.627 04:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:32.627 04:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:32.627 04:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.627 04:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.627 04:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.627 04:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.627 04:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.627 04:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:32.627 04:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.627 04:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.627 04:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.627 04:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:09:32.627 "name": "Existed_Raid", 00:09:32.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.627 "strip_size_kb": 0, 00:09:32.627 "state": "configuring", 00:09:32.627 "raid_level": "raid1", 00:09:32.627 "superblock": false, 00:09:32.627 "num_base_bdevs": 3, 00:09:32.627 "num_base_bdevs_discovered": 2, 00:09:32.627 "num_base_bdevs_operational": 3, 00:09:32.627 "base_bdevs_list": [ 00:09:32.627 { 00:09:32.627 "name": "BaseBdev1", 00:09:32.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.627 "is_configured": false, 00:09:32.627 "data_offset": 0, 00:09:32.627 "data_size": 0 00:09:32.627 }, 00:09:32.627 { 00:09:32.627 "name": "BaseBdev2", 00:09:32.627 "uuid": "5a19dfd1-0f59-49a8-bcbb-172152b8ba99", 00:09:32.627 "is_configured": true, 00:09:32.627 "data_offset": 0, 00:09:32.627 "data_size": 65536 00:09:32.627 }, 00:09:32.627 { 00:09:32.627 "name": "BaseBdev3", 00:09:32.627 "uuid": "5fd3b558-7ee2-4929-a10a-128d35249989", 00:09:32.627 "is_configured": true, 00:09:32.627 "data_offset": 0, 00:09:32.627 "data_size": 65536 00:09:32.627 } 00:09:32.627 ] 00:09:32.627 }' 00:09:32.627 04:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.627 04:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.898 04:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:32.898 04:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.898 04:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.898 [2024-11-21 04:07:32.803068] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:32.898 04:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.898 04:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:09:32.898 04:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:32.898 04:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:32.898 04:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:32.898 04:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:32.898 04:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:32.898 04:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.898 04:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.898 04:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.898 04:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.898 04:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.898 04:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:32.898 04:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.898 04:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.898 04:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.899 04:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.899 "name": "Existed_Raid", 00:09:32.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.899 "strip_size_kb": 0, 00:09:32.899 "state": "configuring", 00:09:32.899 "raid_level": "raid1", 00:09:32.899 "superblock": false, 00:09:32.899 "num_base_bdevs": 3, 
00:09:32.899 "num_base_bdevs_discovered": 1, 00:09:32.899 "num_base_bdevs_operational": 3, 00:09:32.899 "base_bdevs_list": [ 00:09:32.899 { 00:09:32.899 "name": "BaseBdev1", 00:09:32.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.899 "is_configured": false, 00:09:32.899 "data_offset": 0, 00:09:32.899 "data_size": 0 00:09:32.899 }, 00:09:32.899 { 00:09:32.899 "name": null, 00:09:32.899 "uuid": "5a19dfd1-0f59-49a8-bcbb-172152b8ba99", 00:09:32.899 "is_configured": false, 00:09:32.899 "data_offset": 0, 00:09:32.899 "data_size": 65536 00:09:32.899 }, 00:09:32.899 { 00:09:32.899 "name": "BaseBdev3", 00:09:32.899 "uuid": "5fd3b558-7ee2-4929-a10a-128d35249989", 00:09:32.899 "is_configured": true, 00:09:32.899 "data_offset": 0, 00:09:32.899 "data_size": 65536 00:09:32.899 } 00:09:32.899 ] 00:09:32.899 }' 00:09:32.899 04:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.899 04:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.467 04:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.467 04:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:33.467 04:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.467 04:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.467 04:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.467 04:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:33.467 04:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:33.467 04:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.467 04:07:33 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.467 [2024-11-21 04:07:33.307693] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:33.467 BaseBdev1 00:09:33.467 04:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.467 04:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:33.467 04:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:33.467 04:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:33.467 04:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:33.467 04:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:33.467 04:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:33.467 04:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:33.467 04:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.467 04:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.467 04:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.467 04:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:33.467 04:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.467 04:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.467 [ 00:09:33.467 { 00:09:33.467 "name": "BaseBdev1", 00:09:33.467 "aliases": [ 00:09:33.467 "f1480a41-2835-4bb2-a4c8-de3dd8beca49" 00:09:33.467 ], 00:09:33.467 "product_name": "Malloc disk", 
00:09:33.467 "block_size": 512, 00:09:33.467 "num_blocks": 65536, 00:09:33.467 "uuid": "f1480a41-2835-4bb2-a4c8-de3dd8beca49", 00:09:33.467 "assigned_rate_limits": { 00:09:33.467 "rw_ios_per_sec": 0, 00:09:33.467 "rw_mbytes_per_sec": 0, 00:09:33.467 "r_mbytes_per_sec": 0, 00:09:33.467 "w_mbytes_per_sec": 0 00:09:33.467 }, 00:09:33.467 "claimed": true, 00:09:33.467 "claim_type": "exclusive_write", 00:09:33.467 "zoned": false, 00:09:33.467 "supported_io_types": { 00:09:33.467 "read": true, 00:09:33.467 "write": true, 00:09:33.467 "unmap": true, 00:09:33.467 "flush": true, 00:09:33.467 "reset": true, 00:09:33.467 "nvme_admin": false, 00:09:33.467 "nvme_io": false, 00:09:33.467 "nvme_io_md": false, 00:09:33.467 "write_zeroes": true, 00:09:33.467 "zcopy": true, 00:09:33.467 "get_zone_info": false, 00:09:33.467 "zone_management": false, 00:09:33.467 "zone_append": false, 00:09:33.467 "compare": false, 00:09:33.467 "compare_and_write": false, 00:09:33.467 "abort": true, 00:09:33.467 "seek_hole": false, 00:09:33.467 "seek_data": false, 00:09:33.467 "copy": true, 00:09:33.467 "nvme_iov_md": false 00:09:33.467 }, 00:09:33.467 "memory_domains": [ 00:09:33.467 { 00:09:33.467 "dma_device_id": "system", 00:09:33.467 "dma_device_type": 1 00:09:33.467 }, 00:09:33.467 { 00:09:33.467 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.467 "dma_device_type": 2 00:09:33.467 } 00:09:33.467 ], 00:09:33.467 "driver_specific": {} 00:09:33.467 } 00:09:33.467 ] 00:09:33.467 04:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.467 04:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:33.467 04:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:33.467 04:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:33.467 04:07:33 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:33.467 04:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:33.467 04:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:33.467 04:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:33.467 04:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.467 04:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.467 04:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.467 04:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.467 04:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.467 04:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.467 04:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.467 04:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.467 04:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.467 04:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.467 "name": "Existed_Raid", 00:09:33.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.467 "strip_size_kb": 0, 00:09:33.467 "state": "configuring", 00:09:33.467 "raid_level": "raid1", 00:09:33.467 "superblock": false, 00:09:33.467 "num_base_bdevs": 3, 00:09:33.467 "num_base_bdevs_discovered": 2, 00:09:33.467 "num_base_bdevs_operational": 3, 00:09:33.467 "base_bdevs_list": [ 00:09:33.467 { 00:09:33.467 "name": "BaseBdev1", 00:09:33.467 "uuid": 
"f1480a41-2835-4bb2-a4c8-de3dd8beca49", 00:09:33.467 "is_configured": true, 00:09:33.467 "data_offset": 0, 00:09:33.467 "data_size": 65536 00:09:33.467 }, 00:09:33.467 { 00:09:33.467 "name": null, 00:09:33.467 "uuid": "5a19dfd1-0f59-49a8-bcbb-172152b8ba99", 00:09:33.467 "is_configured": false, 00:09:33.467 "data_offset": 0, 00:09:33.467 "data_size": 65536 00:09:33.467 }, 00:09:33.467 { 00:09:33.467 "name": "BaseBdev3", 00:09:33.467 "uuid": "5fd3b558-7ee2-4929-a10a-128d35249989", 00:09:33.467 "is_configured": true, 00:09:33.467 "data_offset": 0, 00:09:33.467 "data_size": 65536 00:09:33.467 } 00:09:33.467 ] 00:09:33.467 }' 00:09:33.467 04:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.468 04:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.034 04:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.034 04:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.034 04:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.034 04:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:34.034 04:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.034 04:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:34.034 04:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:34.034 04:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.034 04:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.034 [2024-11-21 04:07:33.878820] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:34.034 04:07:33 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.034 04:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:34.034 04:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.034 04:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:34.034 04:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:34.034 04:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:34.034 04:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:34.034 04:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.034 04:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.034 04:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.034 04:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.034 04:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.034 04:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.034 04:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.034 04:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.034 04:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.034 04:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.034 "name": "Existed_Raid", 00:09:34.034 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:34.034 "strip_size_kb": 0, 00:09:34.034 "state": "configuring", 00:09:34.034 "raid_level": "raid1", 00:09:34.034 "superblock": false, 00:09:34.034 "num_base_bdevs": 3, 00:09:34.034 "num_base_bdevs_discovered": 1, 00:09:34.034 "num_base_bdevs_operational": 3, 00:09:34.034 "base_bdevs_list": [ 00:09:34.034 { 00:09:34.035 "name": "BaseBdev1", 00:09:34.035 "uuid": "f1480a41-2835-4bb2-a4c8-de3dd8beca49", 00:09:34.035 "is_configured": true, 00:09:34.035 "data_offset": 0, 00:09:34.035 "data_size": 65536 00:09:34.035 }, 00:09:34.035 { 00:09:34.035 "name": null, 00:09:34.035 "uuid": "5a19dfd1-0f59-49a8-bcbb-172152b8ba99", 00:09:34.035 "is_configured": false, 00:09:34.035 "data_offset": 0, 00:09:34.035 "data_size": 65536 00:09:34.035 }, 00:09:34.035 { 00:09:34.035 "name": null, 00:09:34.035 "uuid": "5fd3b558-7ee2-4929-a10a-128d35249989", 00:09:34.035 "is_configured": false, 00:09:34.035 "data_offset": 0, 00:09:34.035 "data_size": 65536 00:09:34.035 } 00:09:34.035 ] 00:09:34.035 }' 00:09:34.035 04:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.035 04:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.603 04:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.603 04:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:34.603 04:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.603 04:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.603 04:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.603 04:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:34.603 04:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:34.603 04:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.603 04:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.603 [2024-11-21 04:07:34.386101] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:34.603 04:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.603 04:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:34.603 04:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.603 04:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:34.603 04:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:34.603 04:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:34.603 04:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:34.603 04:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.603 04:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.603 04:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.603 04:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.603 04:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.603 04:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.603 04:07:34 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.603 04:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.603 04:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.603 04:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.603 "name": "Existed_Raid", 00:09:34.603 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.603 "strip_size_kb": 0, 00:09:34.603 "state": "configuring", 00:09:34.603 "raid_level": "raid1", 00:09:34.603 "superblock": false, 00:09:34.603 "num_base_bdevs": 3, 00:09:34.603 "num_base_bdevs_discovered": 2, 00:09:34.603 "num_base_bdevs_operational": 3, 00:09:34.603 "base_bdevs_list": [ 00:09:34.603 { 00:09:34.603 "name": "BaseBdev1", 00:09:34.603 "uuid": "f1480a41-2835-4bb2-a4c8-de3dd8beca49", 00:09:34.603 "is_configured": true, 00:09:34.603 "data_offset": 0, 00:09:34.603 "data_size": 65536 00:09:34.603 }, 00:09:34.603 { 00:09:34.603 "name": null, 00:09:34.603 "uuid": "5a19dfd1-0f59-49a8-bcbb-172152b8ba99", 00:09:34.603 "is_configured": false, 00:09:34.603 "data_offset": 0, 00:09:34.603 "data_size": 65536 00:09:34.603 }, 00:09:34.603 { 00:09:34.603 "name": "BaseBdev3", 00:09:34.603 "uuid": "5fd3b558-7ee2-4929-a10a-128d35249989", 00:09:34.603 "is_configured": true, 00:09:34.603 "data_offset": 0, 00:09:34.603 "data_size": 65536 00:09:34.603 } 00:09:34.603 ] 00:09:34.603 }' 00:09:34.603 04:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.603 04:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.864 04:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:34.864 04:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.864 04:07:34 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.864 04:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.864 04:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.864 04:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:34.864 04:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:34.864 04:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.864 04:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.864 [2024-11-21 04:07:34.781443] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:34.864 04:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.864 04:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:34.864 04:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.864 04:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:34.864 04:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:34.864 04:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:34.864 04:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:34.864 04:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.864 04:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.864 04:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.864 04:07:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.864 04:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.864 04:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.864 04:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.864 04:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.864 04:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.122 04:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.122 "name": "Existed_Raid", 00:09:35.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.122 "strip_size_kb": 0, 00:09:35.122 "state": "configuring", 00:09:35.122 "raid_level": "raid1", 00:09:35.122 "superblock": false, 00:09:35.122 "num_base_bdevs": 3, 00:09:35.122 "num_base_bdevs_discovered": 1, 00:09:35.122 "num_base_bdevs_operational": 3, 00:09:35.122 "base_bdevs_list": [ 00:09:35.122 { 00:09:35.123 "name": null, 00:09:35.123 "uuid": "f1480a41-2835-4bb2-a4c8-de3dd8beca49", 00:09:35.123 "is_configured": false, 00:09:35.123 "data_offset": 0, 00:09:35.123 "data_size": 65536 00:09:35.123 }, 00:09:35.123 { 00:09:35.123 "name": null, 00:09:35.123 "uuid": "5a19dfd1-0f59-49a8-bcbb-172152b8ba99", 00:09:35.123 "is_configured": false, 00:09:35.123 "data_offset": 0, 00:09:35.123 "data_size": 65536 00:09:35.123 }, 00:09:35.123 { 00:09:35.123 "name": "BaseBdev3", 00:09:35.123 "uuid": "5fd3b558-7ee2-4929-a10a-128d35249989", 00:09:35.123 "is_configured": true, 00:09:35.123 "data_offset": 0, 00:09:35.123 "data_size": 65536 00:09:35.123 } 00:09:35.123 ] 00:09:35.123 }' 00:09:35.123 04:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.123 04:07:34 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:09:35.381 04:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.381 04:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:35.381 04:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.381 04:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.381 04:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.381 04:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:35.381 04:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:35.381 04:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.381 04:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.381 [2024-11-21 04:07:35.340797] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:35.381 04:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.381 04:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:35.381 04:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.381 04:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:35.381 04:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:35.381 04:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:35.381 04:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:09:35.381 04:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.381 04:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.381 04:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.381 04:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.381 04:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.640 04:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.640 04:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.640 04:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.640 04:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.640 04:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.640 "name": "Existed_Raid", 00:09:35.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.640 "strip_size_kb": 0, 00:09:35.640 "state": "configuring", 00:09:35.640 "raid_level": "raid1", 00:09:35.640 "superblock": false, 00:09:35.640 "num_base_bdevs": 3, 00:09:35.640 "num_base_bdevs_discovered": 2, 00:09:35.640 "num_base_bdevs_operational": 3, 00:09:35.640 "base_bdevs_list": [ 00:09:35.640 { 00:09:35.640 "name": null, 00:09:35.640 "uuid": "f1480a41-2835-4bb2-a4c8-de3dd8beca49", 00:09:35.640 "is_configured": false, 00:09:35.640 "data_offset": 0, 00:09:35.640 "data_size": 65536 00:09:35.640 }, 00:09:35.640 { 00:09:35.640 "name": "BaseBdev2", 00:09:35.640 "uuid": "5a19dfd1-0f59-49a8-bcbb-172152b8ba99", 00:09:35.640 "is_configured": true, 00:09:35.640 "data_offset": 0, 00:09:35.640 "data_size": 65536 00:09:35.640 }, 00:09:35.640 { 
00:09:35.640 "name": "BaseBdev3", 00:09:35.640 "uuid": "5fd3b558-7ee2-4929-a10a-128d35249989", 00:09:35.640 "is_configured": true, 00:09:35.640 "data_offset": 0, 00:09:35.640 "data_size": 65536 00:09:35.641 } 00:09:35.641 ] 00:09:35.641 }' 00:09:35.641 04:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.641 04:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.899 04:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.899 04:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:35.899 04:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.899 04:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.899 04:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.899 04:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:35.899 04:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.899 04:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:35.899 04:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.899 04:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.899 04:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.159 04:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f1480a41-2835-4bb2-a4c8-de3dd8beca49 00:09:36.159 04:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.159 04:07:35 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.159 [2024-11-21 04:07:35.909120] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:36.159 [2024-11-21 04:07:35.909174] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:09:36.159 [2024-11-21 04:07:35.909183] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:36.159 [2024-11-21 04:07:35.909550] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:09:36.159 [2024-11-21 04:07:35.909705] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:09:36.159 [2024-11-21 04:07:35.909756] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:09:36.159 [2024-11-21 04:07:35.909969] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:36.159 NewBaseBdev 00:09:36.159 04:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.159 04:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:36.159 04:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:36.159 04:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:36.159 04:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:36.159 04:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:36.159 04:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:36.159 04:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:36.159 04:07:35 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.159 04:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.159 04:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.159 04:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:36.159 04:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.159 04:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.159 [ 00:09:36.159 { 00:09:36.159 "name": "NewBaseBdev", 00:09:36.159 "aliases": [ 00:09:36.159 "f1480a41-2835-4bb2-a4c8-de3dd8beca49" 00:09:36.159 ], 00:09:36.159 "product_name": "Malloc disk", 00:09:36.159 "block_size": 512, 00:09:36.159 "num_blocks": 65536, 00:09:36.159 "uuid": "f1480a41-2835-4bb2-a4c8-de3dd8beca49", 00:09:36.159 "assigned_rate_limits": { 00:09:36.159 "rw_ios_per_sec": 0, 00:09:36.159 "rw_mbytes_per_sec": 0, 00:09:36.159 "r_mbytes_per_sec": 0, 00:09:36.159 "w_mbytes_per_sec": 0 00:09:36.159 }, 00:09:36.159 "claimed": true, 00:09:36.159 "claim_type": "exclusive_write", 00:09:36.159 "zoned": false, 00:09:36.159 "supported_io_types": { 00:09:36.159 "read": true, 00:09:36.159 "write": true, 00:09:36.159 "unmap": true, 00:09:36.159 "flush": true, 00:09:36.159 "reset": true, 00:09:36.159 "nvme_admin": false, 00:09:36.159 "nvme_io": false, 00:09:36.159 "nvme_io_md": false, 00:09:36.159 "write_zeroes": true, 00:09:36.159 "zcopy": true, 00:09:36.159 "get_zone_info": false, 00:09:36.159 "zone_management": false, 00:09:36.159 "zone_append": false, 00:09:36.159 "compare": false, 00:09:36.159 "compare_and_write": false, 00:09:36.159 "abort": true, 00:09:36.159 "seek_hole": false, 00:09:36.159 "seek_data": false, 00:09:36.159 "copy": true, 00:09:36.159 "nvme_iov_md": false 00:09:36.159 }, 00:09:36.159 "memory_domains": [ 00:09:36.159 { 00:09:36.159 
"dma_device_id": "system", 00:09:36.159 "dma_device_type": 1 00:09:36.159 }, 00:09:36.159 { 00:09:36.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.159 "dma_device_type": 2 00:09:36.159 } 00:09:36.159 ], 00:09:36.159 "driver_specific": {} 00:09:36.159 } 00:09:36.159 ] 00:09:36.159 04:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.159 04:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:36.159 04:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:36.159 04:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:36.159 04:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:36.159 04:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:36.159 04:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:36.159 04:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:36.159 04:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.159 04:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.159 04:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.159 04:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.159 04:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.159 04:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.159 04:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.159 04:07:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.159 04:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.159 04:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.159 "name": "Existed_Raid", 00:09:36.159 "uuid": "29d58cb4-cd9b-42a1-b137-a74435adb185", 00:09:36.159 "strip_size_kb": 0, 00:09:36.159 "state": "online", 00:09:36.159 "raid_level": "raid1", 00:09:36.159 "superblock": false, 00:09:36.159 "num_base_bdevs": 3, 00:09:36.159 "num_base_bdevs_discovered": 3, 00:09:36.159 "num_base_bdevs_operational": 3, 00:09:36.159 "base_bdevs_list": [ 00:09:36.159 { 00:09:36.159 "name": "NewBaseBdev", 00:09:36.159 "uuid": "f1480a41-2835-4bb2-a4c8-de3dd8beca49", 00:09:36.159 "is_configured": true, 00:09:36.159 "data_offset": 0, 00:09:36.159 "data_size": 65536 00:09:36.159 }, 00:09:36.159 { 00:09:36.159 "name": "BaseBdev2", 00:09:36.159 "uuid": "5a19dfd1-0f59-49a8-bcbb-172152b8ba99", 00:09:36.159 "is_configured": true, 00:09:36.159 "data_offset": 0, 00:09:36.159 "data_size": 65536 00:09:36.159 }, 00:09:36.159 { 00:09:36.159 "name": "BaseBdev3", 00:09:36.159 "uuid": "5fd3b558-7ee2-4929-a10a-128d35249989", 00:09:36.159 "is_configured": true, 00:09:36.159 "data_offset": 0, 00:09:36.159 "data_size": 65536 00:09:36.159 } 00:09:36.159 ] 00:09:36.159 }' 00:09:36.159 04:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.159 04:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.729 04:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:36.729 04:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:36.729 04:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:36.729 
04:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:36.729 04:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:36.729 04:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:36.729 04:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:36.729 04:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.729 04:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:36.729 04:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.729 [2024-11-21 04:07:36.416648] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:36.729 04:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.729 04:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:36.729 "name": "Existed_Raid", 00:09:36.729 "aliases": [ 00:09:36.729 "29d58cb4-cd9b-42a1-b137-a74435adb185" 00:09:36.729 ], 00:09:36.729 "product_name": "Raid Volume", 00:09:36.729 "block_size": 512, 00:09:36.729 "num_blocks": 65536, 00:09:36.729 "uuid": "29d58cb4-cd9b-42a1-b137-a74435adb185", 00:09:36.729 "assigned_rate_limits": { 00:09:36.729 "rw_ios_per_sec": 0, 00:09:36.729 "rw_mbytes_per_sec": 0, 00:09:36.729 "r_mbytes_per_sec": 0, 00:09:36.729 "w_mbytes_per_sec": 0 00:09:36.729 }, 00:09:36.729 "claimed": false, 00:09:36.729 "zoned": false, 00:09:36.729 "supported_io_types": { 00:09:36.729 "read": true, 00:09:36.729 "write": true, 00:09:36.729 "unmap": false, 00:09:36.729 "flush": false, 00:09:36.729 "reset": true, 00:09:36.729 "nvme_admin": false, 00:09:36.729 "nvme_io": false, 00:09:36.729 "nvme_io_md": false, 00:09:36.729 "write_zeroes": true, 00:09:36.729 "zcopy": false, 00:09:36.729 
"get_zone_info": false, 00:09:36.729 "zone_management": false, 00:09:36.729 "zone_append": false, 00:09:36.729 "compare": false, 00:09:36.729 "compare_and_write": false, 00:09:36.729 "abort": false, 00:09:36.729 "seek_hole": false, 00:09:36.729 "seek_data": false, 00:09:36.729 "copy": false, 00:09:36.729 "nvme_iov_md": false 00:09:36.729 }, 00:09:36.729 "memory_domains": [ 00:09:36.729 { 00:09:36.729 "dma_device_id": "system", 00:09:36.729 "dma_device_type": 1 00:09:36.729 }, 00:09:36.729 { 00:09:36.729 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.729 "dma_device_type": 2 00:09:36.729 }, 00:09:36.729 { 00:09:36.729 "dma_device_id": "system", 00:09:36.729 "dma_device_type": 1 00:09:36.729 }, 00:09:36.729 { 00:09:36.729 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.730 "dma_device_type": 2 00:09:36.730 }, 00:09:36.730 { 00:09:36.730 "dma_device_id": "system", 00:09:36.730 "dma_device_type": 1 00:09:36.730 }, 00:09:36.730 { 00:09:36.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.730 "dma_device_type": 2 00:09:36.730 } 00:09:36.730 ], 00:09:36.730 "driver_specific": { 00:09:36.730 "raid": { 00:09:36.730 "uuid": "29d58cb4-cd9b-42a1-b137-a74435adb185", 00:09:36.730 "strip_size_kb": 0, 00:09:36.730 "state": "online", 00:09:36.730 "raid_level": "raid1", 00:09:36.730 "superblock": false, 00:09:36.730 "num_base_bdevs": 3, 00:09:36.730 "num_base_bdevs_discovered": 3, 00:09:36.730 "num_base_bdevs_operational": 3, 00:09:36.730 "base_bdevs_list": [ 00:09:36.730 { 00:09:36.730 "name": "NewBaseBdev", 00:09:36.730 "uuid": "f1480a41-2835-4bb2-a4c8-de3dd8beca49", 00:09:36.730 "is_configured": true, 00:09:36.730 "data_offset": 0, 00:09:36.730 "data_size": 65536 00:09:36.730 }, 00:09:36.730 { 00:09:36.730 "name": "BaseBdev2", 00:09:36.730 "uuid": "5a19dfd1-0f59-49a8-bcbb-172152b8ba99", 00:09:36.730 "is_configured": true, 00:09:36.730 "data_offset": 0, 00:09:36.730 "data_size": 65536 00:09:36.730 }, 00:09:36.730 { 00:09:36.730 "name": "BaseBdev3", 00:09:36.730 "uuid": 
"5fd3b558-7ee2-4929-a10a-128d35249989", 00:09:36.730 "is_configured": true, 00:09:36.730 "data_offset": 0, 00:09:36.730 "data_size": 65536 00:09:36.730 } 00:09:36.730 ] 00:09:36.730 } 00:09:36.730 } 00:09:36.730 }' 00:09:36.730 04:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:36.730 04:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:36.730 BaseBdev2 00:09:36.730 BaseBdev3' 00:09:36.730 04:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:36.730 04:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:36.730 04:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:36.730 04:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:36.730 04:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:36.730 04:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.730 04:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.730 04:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.730 04:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:36.730 04:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:36.730 04:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:36.730 04:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:09:36.730 04:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.730 04:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.730 04:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:36.730 04:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.730 04:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:36.730 04:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:36.730 04:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:36.730 04:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:36.730 04:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:36.730 04:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.730 04:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.730 04:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.730 04:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:36.730 04:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:36.730 04:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:36.730 04:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.730 04:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.730 
[2024-11-21 04:07:36.691825] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:36.730 [2024-11-21 04:07:36.691856] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:36.730 [2024-11-21 04:07:36.691934] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:36.730 [2024-11-21 04:07:36.692247] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:36.730 [2024-11-21 04:07:36.692259] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:09:36.730 04:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.730 04:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 78469 00:09:36.730 04:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 78469 ']' 00:09:36.730 04:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 78469 00:09:36.989 04:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:36.989 04:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:36.989 04:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78469 00:09:36.989 04:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:36.989 04:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:36.989 04:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78469' 00:09:36.989 killing process with pid 78469 00:09:36.989 04:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 78469 00:09:36.989 [2024-11-21 
04:07:36.742247] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:36.989 04:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 78469 00:09:36.989 [2024-11-21 04:07:36.803145] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:37.249 04:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:37.249 00:09:37.249 real 0m9.165s 00:09:37.249 user 0m15.303s 00:09:37.249 sys 0m2.043s 00:09:37.249 04:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:37.249 ************************************ 00:09:37.249 END TEST raid_state_function_test 00:09:37.249 ************************************ 00:09:37.249 04:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.249 04:07:37 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:09:37.249 04:07:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:37.249 04:07:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:37.249 04:07:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:37.249 ************************************ 00:09:37.249 START TEST raid_state_function_test_sb 00:09:37.249 ************************************ 00:09:37.249 04:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true 00:09:37.249 04:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:37.249 04:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:37.249 04:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:37.249 04:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:37.249 04:07:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:37.249 04:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:37.249 04:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:37.249 04:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:37.249 04:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:37.249 04:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:37.249 04:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:37.249 04:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:37.249 04:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:37.249 04:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:37.249 04:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:37.249 04:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:37.249 04:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:37.508 04:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:37.508 04:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:37.508 04:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:37.508 04:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:37.508 04:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:37.508 
04:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:37.508 04:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:37.508 04:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:37.508 04:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=79075 00:09:37.508 04:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:37.508 Process raid pid: 79075 00:09:37.508 04:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 79075' 00:09:37.508 04:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 79075 00:09:37.508 04:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 79075 ']' 00:09:37.508 04:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:37.508 04:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:37.508 04:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:37.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:37.508 04:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:37.508 04:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.508 [2024-11-21 04:07:37.313863] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:09:37.508 [2024-11-21 04:07:37.314084] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:37.508 [2024-11-21 04:07:37.470022] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:37.767 [2024-11-21 04:07:37.511474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.767 [2024-11-21 04:07:37.590535] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:37.767 [2024-11-21 04:07:37.590570] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:38.333 04:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:38.333 04:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:38.333 04:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:38.333 04:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.333 04:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.333 [2024-11-21 04:07:38.159178] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:38.333 [2024-11-21 04:07:38.159375] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:38.333 [2024-11-21 04:07:38.159397] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:38.333 [2024-11-21 04:07:38.159409] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:38.333 [2024-11-21 04:07:38.159416] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:09:38.333 [2024-11-21 04:07:38.159428] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:38.333 04:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.333 04:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:38.333 04:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.333 04:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:38.333 04:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:38.333 04:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:38.333 04:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:38.333 04:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.333 04:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.333 04:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.334 04:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.334 04:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.334 04:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.334 04:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.334 04:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.334 04:07:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.334 04:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.334 "name": "Existed_Raid", 00:09:38.334 "uuid": "d135c793-042d-4908-a195-ef74cd95ead7", 00:09:38.334 "strip_size_kb": 0, 00:09:38.334 "state": "configuring", 00:09:38.334 "raid_level": "raid1", 00:09:38.334 "superblock": true, 00:09:38.334 "num_base_bdevs": 3, 00:09:38.334 "num_base_bdevs_discovered": 0, 00:09:38.334 "num_base_bdevs_operational": 3, 00:09:38.334 "base_bdevs_list": [ 00:09:38.334 { 00:09:38.334 "name": "BaseBdev1", 00:09:38.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.334 "is_configured": false, 00:09:38.334 "data_offset": 0, 00:09:38.334 "data_size": 0 00:09:38.334 }, 00:09:38.334 { 00:09:38.334 "name": "BaseBdev2", 00:09:38.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.334 "is_configured": false, 00:09:38.334 "data_offset": 0, 00:09:38.334 "data_size": 0 00:09:38.334 }, 00:09:38.334 { 00:09:38.334 "name": "BaseBdev3", 00:09:38.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.334 "is_configured": false, 00:09:38.334 "data_offset": 0, 00:09:38.334 "data_size": 0 00:09:38.334 } 00:09:38.334 ] 00:09:38.334 }' 00:09:38.334 04:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.334 04:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.593 04:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:38.593 04:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.593 04:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.593 [2024-11-21 04:07:38.554455] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:38.593 [2024-11-21 04:07:38.554579] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:09:38.593 04:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.593 04:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:38.593 04:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.593 04:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.593 [2024-11-21 04:07:38.562436] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:38.593 [2024-11-21 04:07:38.562538] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:38.593 [2024-11-21 04:07:38.562578] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:38.593 [2024-11-21 04:07:38.562625] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:38.593 [2024-11-21 04:07:38.562663] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:38.593 [2024-11-21 04:07:38.562712] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:38.852 04:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.852 04:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:38.852 04:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.852 04:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.852 [2024-11-21 04:07:38.586241] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:38.852 BaseBdev1 
00:09:38.852 04:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.852 04:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:38.852 04:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:38.852 04:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:38.852 04:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:38.852 04:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:38.852 04:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:38.852 04:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:38.852 04:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.852 04:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.852 04:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.852 04:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:38.852 04:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.852 04:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.852 [ 00:09:38.852 { 00:09:38.852 "name": "BaseBdev1", 00:09:38.852 "aliases": [ 00:09:38.852 "cde0fd8c-c582-4c99-a26f-9ed9f5064680" 00:09:38.852 ], 00:09:38.852 "product_name": "Malloc disk", 00:09:38.852 "block_size": 512, 00:09:38.852 "num_blocks": 65536, 00:09:38.852 "uuid": "cde0fd8c-c582-4c99-a26f-9ed9f5064680", 00:09:38.852 "assigned_rate_limits": { 00:09:38.852 
"rw_ios_per_sec": 0, 00:09:38.852 "rw_mbytes_per_sec": 0, 00:09:38.852 "r_mbytes_per_sec": 0, 00:09:38.852 "w_mbytes_per_sec": 0 00:09:38.852 }, 00:09:38.852 "claimed": true, 00:09:38.852 "claim_type": "exclusive_write", 00:09:38.852 "zoned": false, 00:09:38.852 "supported_io_types": { 00:09:38.852 "read": true, 00:09:38.852 "write": true, 00:09:38.852 "unmap": true, 00:09:38.852 "flush": true, 00:09:38.852 "reset": true, 00:09:38.852 "nvme_admin": false, 00:09:38.852 "nvme_io": false, 00:09:38.852 "nvme_io_md": false, 00:09:38.852 "write_zeroes": true, 00:09:38.852 "zcopy": true, 00:09:38.852 "get_zone_info": false, 00:09:38.852 "zone_management": false, 00:09:38.852 "zone_append": false, 00:09:38.852 "compare": false, 00:09:38.852 "compare_and_write": false, 00:09:38.852 "abort": true, 00:09:38.852 "seek_hole": false, 00:09:38.852 "seek_data": false, 00:09:38.852 "copy": true, 00:09:38.852 "nvme_iov_md": false 00:09:38.852 }, 00:09:38.852 "memory_domains": [ 00:09:38.852 { 00:09:38.852 "dma_device_id": "system", 00:09:38.852 "dma_device_type": 1 00:09:38.852 }, 00:09:38.852 { 00:09:38.852 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.852 "dma_device_type": 2 00:09:38.852 } 00:09:38.852 ], 00:09:38.852 "driver_specific": {} 00:09:38.852 } 00:09:38.852 ] 00:09:38.852 04:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.852 04:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:38.852 04:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:38.852 04:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.852 04:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:38.852 04:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:09:38.852 04:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:38.852 04:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:38.852 04:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.852 04:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.852 04:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.852 04:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.852 04:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.852 04:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.852 04:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.852 04:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.852 04:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.852 04:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.852 "name": "Existed_Raid", 00:09:38.852 "uuid": "6ed8727d-3724-4167-97e4-d13296b60da3", 00:09:38.852 "strip_size_kb": 0, 00:09:38.852 "state": "configuring", 00:09:38.852 "raid_level": "raid1", 00:09:38.852 "superblock": true, 00:09:38.853 "num_base_bdevs": 3, 00:09:38.853 "num_base_bdevs_discovered": 1, 00:09:38.853 "num_base_bdevs_operational": 3, 00:09:38.853 "base_bdevs_list": [ 00:09:38.853 { 00:09:38.853 "name": "BaseBdev1", 00:09:38.853 "uuid": "cde0fd8c-c582-4c99-a26f-9ed9f5064680", 00:09:38.853 "is_configured": true, 00:09:38.853 "data_offset": 2048, 00:09:38.853 "data_size": 63488 
00:09:38.853 }, 00:09:38.853 { 00:09:38.853 "name": "BaseBdev2", 00:09:38.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.853 "is_configured": false, 00:09:38.853 "data_offset": 0, 00:09:38.853 "data_size": 0 00:09:38.853 }, 00:09:38.853 { 00:09:38.853 "name": "BaseBdev3", 00:09:38.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.853 "is_configured": false, 00:09:38.853 "data_offset": 0, 00:09:38.853 "data_size": 0 00:09:38.853 } 00:09:38.853 ] 00:09:38.853 }' 00:09:38.853 04:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.853 04:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.114 04:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:39.114 04:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.114 04:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.387 [2024-11-21 04:07:39.085453] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:39.387 [2024-11-21 04:07:39.085607] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:09:39.387 04:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.388 04:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:39.388 04:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.388 04:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.388 [2024-11-21 04:07:39.097469] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:39.388 [2024-11-21 04:07:39.099936] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:39.388 [2024-11-21 04:07:39.099982] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:39.388 [2024-11-21 04:07:39.099993] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:39.388 [2024-11-21 04:07:39.100004] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:39.388 04:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.388 04:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:39.388 04:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:39.388 04:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:39.388 04:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:39.388 04:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:39.388 04:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:39.388 04:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:39.388 04:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:39.388 04:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.388 04:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.388 04:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.388 04:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:09:39.388 04:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.388 04:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.388 04:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.388 04:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.388 04:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.389 04:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.389 "name": "Existed_Raid", 00:09:39.389 "uuid": "8e7640f0-bfc2-4ab6-b706-c9ceb84ab6d6", 00:09:39.389 "strip_size_kb": 0, 00:09:39.389 "state": "configuring", 00:09:39.389 "raid_level": "raid1", 00:09:39.389 "superblock": true, 00:09:39.389 "num_base_bdevs": 3, 00:09:39.389 "num_base_bdevs_discovered": 1, 00:09:39.389 "num_base_bdevs_operational": 3, 00:09:39.389 "base_bdevs_list": [ 00:09:39.389 { 00:09:39.389 "name": "BaseBdev1", 00:09:39.389 "uuid": "cde0fd8c-c582-4c99-a26f-9ed9f5064680", 00:09:39.389 "is_configured": true, 00:09:39.389 "data_offset": 2048, 00:09:39.389 "data_size": 63488 00:09:39.389 }, 00:09:39.389 { 00:09:39.389 "name": "BaseBdev2", 00:09:39.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.389 "is_configured": false, 00:09:39.389 "data_offset": 0, 00:09:39.389 "data_size": 0 00:09:39.389 }, 00:09:39.389 { 00:09:39.389 "name": "BaseBdev3", 00:09:39.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.389 "is_configured": false, 00:09:39.389 "data_offset": 0, 00:09:39.389 "data_size": 0 00:09:39.389 } 00:09:39.389 ] 00:09:39.389 }' 00:09:39.390 04:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.390 04:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:39.651 04:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:39.651 04:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.651 04:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.651 [2024-11-21 04:07:39.521484] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:39.651 BaseBdev2 00:09:39.651 04:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.651 04:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:39.651 04:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:39.651 04:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:39.651 04:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:39.651 04:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:39.651 04:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:39.651 04:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:39.651 04:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.651 04:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.651 04:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.651 04:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:39.651 04:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:39.651 04:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.651 [ 00:09:39.651 { 00:09:39.651 "name": "BaseBdev2", 00:09:39.651 "aliases": [ 00:09:39.651 "ead0354f-e7be-4f56-95a4-5d6fed77a96c" 00:09:39.651 ], 00:09:39.651 "product_name": "Malloc disk", 00:09:39.651 "block_size": 512, 00:09:39.651 "num_blocks": 65536, 00:09:39.651 "uuid": "ead0354f-e7be-4f56-95a4-5d6fed77a96c", 00:09:39.651 "assigned_rate_limits": { 00:09:39.651 "rw_ios_per_sec": 0, 00:09:39.651 "rw_mbytes_per_sec": 0, 00:09:39.651 "r_mbytes_per_sec": 0, 00:09:39.651 "w_mbytes_per_sec": 0 00:09:39.651 }, 00:09:39.651 "claimed": true, 00:09:39.651 "claim_type": "exclusive_write", 00:09:39.651 "zoned": false, 00:09:39.651 "supported_io_types": { 00:09:39.651 "read": true, 00:09:39.651 "write": true, 00:09:39.651 "unmap": true, 00:09:39.651 "flush": true, 00:09:39.651 "reset": true, 00:09:39.651 "nvme_admin": false, 00:09:39.651 "nvme_io": false, 00:09:39.651 "nvme_io_md": false, 00:09:39.651 "write_zeroes": true, 00:09:39.651 "zcopy": true, 00:09:39.651 "get_zone_info": false, 00:09:39.651 "zone_management": false, 00:09:39.651 "zone_append": false, 00:09:39.651 "compare": false, 00:09:39.651 "compare_and_write": false, 00:09:39.651 "abort": true, 00:09:39.651 "seek_hole": false, 00:09:39.651 "seek_data": false, 00:09:39.651 "copy": true, 00:09:39.651 "nvme_iov_md": false 00:09:39.651 }, 00:09:39.651 "memory_domains": [ 00:09:39.651 { 00:09:39.651 "dma_device_id": "system", 00:09:39.651 "dma_device_type": 1 00:09:39.651 }, 00:09:39.651 { 00:09:39.651 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.651 "dma_device_type": 2 00:09:39.651 } 00:09:39.651 ], 00:09:39.651 "driver_specific": {} 00:09:39.651 } 00:09:39.651 ] 00:09:39.651 04:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.651 04:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:09:39.651 04:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:39.651 04:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:39.651 04:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:39.651 04:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:39.651 04:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:39.651 04:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:39.651 04:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:39.651 04:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:39.651 04:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.651 04:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.651 04:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.651 04:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.651 04:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.651 04:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.651 04:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.651 04:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.651 04:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.651 
04:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.651 "name": "Existed_Raid", 00:09:39.651 "uuid": "8e7640f0-bfc2-4ab6-b706-c9ceb84ab6d6", 00:09:39.651 "strip_size_kb": 0, 00:09:39.651 "state": "configuring", 00:09:39.651 "raid_level": "raid1", 00:09:39.651 "superblock": true, 00:09:39.651 "num_base_bdevs": 3, 00:09:39.651 "num_base_bdevs_discovered": 2, 00:09:39.651 "num_base_bdevs_operational": 3, 00:09:39.651 "base_bdevs_list": [ 00:09:39.651 { 00:09:39.651 "name": "BaseBdev1", 00:09:39.651 "uuid": "cde0fd8c-c582-4c99-a26f-9ed9f5064680", 00:09:39.651 "is_configured": true, 00:09:39.651 "data_offset": 2048, 00:09:39.651 "data_size": 63488 00:09:39.651 }, 00:09:39.651 { 00:09:39.651 "name": "BaseBdev2", 00:09:39.651 "uuid": "ead0354f-e7be-4f56-95a4-5d6fed77a96c", 00:09:39.651 "is_configured": true, 00:09:39.651 "data_offset": 2048, 00:09:39.651 "data_size": 63488 00:09:39.651 }, 00:09:39.651 { 00:09:39.651 "name": "BaseBdev3", 00:09:39.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.651 "is_configured": false, 00:09:39.651 "data_offset": 0, 00:09:39.651 "data_size": 0 00:09:39.651 } 00:09:39.651 ] 00:09:39.651 }' 00:09:39.651 04:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.651 04:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.219 04:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:40.220 04:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.220 04:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.220 BaseBdev3 00:09:40.220 [2024-11-21 04:07:40.080479] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:40.220 [2024-11-21 04:07:40.080724] bdev_raid.c:1734:raid_bdev_configure_cont: 
*DEBUG*: io device register 0x617000001900 00:09:40.220 [2024-11-21 04:07:40.080746] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:40.220 [2024-11-21 04:07:40.081115] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:09:40.220 [2024-11-21 04:07:40.081309] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:09:40.220 [2024-11-21 04:07:40.081328] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:09:40.220 [2024-11-21 04:07:40.081579] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:40.220 04:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.220 04:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:40.220 04:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:40.220 04:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:40.220 04:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:40.220 04:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:40.220 04:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:40.220 04:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:40.220 04:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.220 04:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.220 04:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.220 04:07:40 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:40.220 04:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.220 04:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.220 [ 00:09:40.220 { 00:09:40.220 "name": "BaseBdev3", 00:09:40.220 "aliases": [ 00:09:40.220 "c585f4a2-3f02-4122-ae12-e13abbc9ad39" 00:09:40.220 ], 00:09:40.220 "product_name": "Malloc disk", 00:09:40.220 "block_size": 512, 00:09:40.220 "num_blocks": 65536, 00:09:40.220 "uuid": "c585f4a2-3f02-4122-ae12-e13abbc9ad39", 00:09:40.220 "assigned_rate_limits": { 00:09:40.220 "rw_ios_per_sec": 0, 00:09:40.220 "rw_mbytes_per_sec": 0, 00:09:40.220 "r_mbytes_per_sec": 0, 00:09:40.220 "w_mbytes_per_sec": 0 00:09:40.220 }, 00:09:40.220 "claimed": true, 00:09:40.220 "claim_type": "exclusive_write", 00:09:40.220 "zoned": false, 00:09:40.220 "supported_io_types": { 00:09:40.220 "read": true, 00:09:40.220 "write": true, 00:09:40.220 "unmap": true, 00:09:40.220 "flush": true, 00:09:40.220 "reset": true, 00:09:40.220 "nvme_admin": false, 00:09:40.220 "nvme_io": false, 00:09:40.220 "nvme_io_md": false, 00:09:40.220 "write_zeroes": true, 00:09:40.220 "zcopy": true, 00:09:40.220 "get_zone_info": false, 00:09:40.220 "zone_management": false, 00:09:40.220 "zone_append": false, 00:09:40.220 "compare": false, 00:09:40.220 "compare_and_write": false, 00:09:40.220 "abort": true, 00:09:40.220 "seek_hole": false, 00:09:40.220 "seek_data": false, 00:09:40.220 "copy": true, 00:09:40.220 "nvme_iov_md": false 00:09:40.220 }, 00:09:40.220 "memory_domains": [ 00:09:40.220 { 00:09:40.220 "dma_device_id": "system", 00:09:40.220 "dma_device_type": 1 00:09:40.220 }, 00:09:40.220 { 00:09:40.220 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.220 "dma_device_type": 2 00:09:40.220 } 00:09:40.220 ], 00:09:40.220 "driver_specific": {} 00:09:40.220 } 00:09:40.220 ] 
00:09:40.220 04:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.220 04:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:40.220 04:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:40.220 04:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:40.220 04:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:40.220 04:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:40.220 04:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:40.220 04:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:40.220 04:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:40.220 04:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:40.220 04:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.220 04:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.220 04:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.220 04:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.220 04:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.220 04:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:40.220 04:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.220 
04:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.220 04:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.220 04:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.220 "name": "Existed_Raid", 00:09:40.220 "uuid": "8e7640f0-bfc2-4ab6-b706-c9ceb84ab6d6", 00:09:40.220 "strip_size_kb": 0, 00:09:40.220 "state": "online", 00:09:40.220 "raid_level": "raid1", 00:09:40.220 "superblock": true, 00:09:40.220 "num_base_bdevs": 3, 00:09:40.220 "num_base_bdevs_discovered": 3, 00:09:40.220 "num_base_bdevs_operational": 3, 00:09:40.220 "base_bdevs_list": [ 00:09:40.220 { 00:09:40.220 "name": "BaseBdev1", 00:09:40.220 "uuid": "cde0fd8c-c582-4c99-a26f-9ed9f5064680", 00:09:40.220 "is_configured": true, 00:09:40.220 "data_offset": 2048, 00:09:40.220 "data_size": 63488 00:09:40.220 }, 00:09:40.220 { 00:09:40.220 "name": "BaseBdev2", 00:09:40.220 "uuid": "ead0354f-e7be-4f56-95a4-5d6fed77a96c", 00:09:40.220 "is_configured": true, 00:09:40.220 "data_offset": 2048, 00:09:40.220 "data_size": 63488 00:09:40.220 }, 00:09:40.220 { 00:09:40.220 "name": "BaseBdev3", 00:09:40.220 "uuid": "c585f4a2-3f02-4122-ae12-e13abbc9ad39", 00:09:40.220 "is_configured": true, 00:09:40.220 "data_offset": 2048, 00:09:40.220 "data_size": 63488 00:09:40.220 } 00:09:40.220 ] 00:09:40.220 }' 00:09:40.220 04:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.220 04:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.789 04:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:40.789 04:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:40.789 04:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:09:40.789 04:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:40.789 04:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:40.789 04:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:40.789 04:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:40.789 04:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.789 04:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:40.789 04:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.789 [2024-11-21 04:07:40.560059] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:40.789 04:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.789 04:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:40.789 "name": "Existed_Raid", 00:09:40.789 "aliases": [ 00:09:40.789 "8e7640f0-bfc2-4ab6-b706-c9ceb84ab6d6" 00:09:40.789 ], 00:09:40.789 "product_name": "Raid Volume", 00:09:40.789 "block_size": 512, 00:09:40.789 "num_blocks": 63488, 00:09:40.789 "uuid": "8e7640f0-bfc2-4ab6-b706-c9ceb84ab6d6", 00:09:40.789 "assigned_rate_limits": { 00:09:40.789 "rw_ios_per_sec": 0, 00:09:40.789 "rw_mbytes_per_sec": 0, 00:09:40.789 "r_mbytes_per_sec": 0, 00:09:40.789 "w_mbytes_per_sec": 0 00:09:40.789 }, 00:09:40.789 "claimed": false, 00:09:40.789 "zoned": false, 00:09:40.789 "supported_io_types": { 00:09:40.789 "read": true, 00:09:40.790 "write": true, 00:09:40.790 "unmap": false, 00:09:40.790 "flush": false, 00:09:40.790 "reset": true, 00:09:40.790 "nvme_admin": false, 00:09:40.790 "nvme_io": false, 00:09:40.790 "nvme_io_md": false, 00:09:40.790 "write_zeroes": true, 
00:09:40.790 "zcopy": false, 00:09:40.790 "get_zone_info": false, 00:09:40.790 "zone_management": false, 00:09:40.790 "zone_append": false, 00:09:40.790 "compare": false, 00:09:40.790 "compare_and_write": false, 00:09:40.790 "abort": false, 00:09:40.790 "seek_hole": false, 00:09:40.790 "seek_data": false, 00:09:40.790 "copy": false, 00:09:40.790 "nvme_iov_md": false 00:09:40.790 }, 00:09:40.790 "memory_domains": [ 00:09:40.790 { 00:09:40.790 "dma_device_id": "system", 00:09:40.790 "dma_device_type": 1 00:09:40.790 }, 00:09:40.790 { 00:09:40.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.790 "dma_device_type": 2 00:09:40.790 }, 00:09:40.790 { 00:09:40.790 "dma_device_id": "system", 00:09:40.790 "dma_device_type": 1 00:09:40.790 }, 00:09:40.790 { 00:09:40.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.790 "dma_device_type": 2 00:09:40.790 }, 00:09:40.790 { 00:09:40.790 "dma_device_id": "system", 00:09:40.790 "dma_device_type": 1 00:09:40.790 }, 00:09:40.790 { 00:09:40.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.790 "dma_device_type": 2 00:09:40.790 } 00:09:40.790 ], 00:09:40.790 "driver_specific": { 00:09:40.790 "raid": { 00:09:40.790 "uuid": "8e7640f0-bfc2-4ab6-b706-c9ceb84ab6d6", 00:09:40.790 "strip_size_kb": 0, 00:09:40.790 "state": "online", 00:09:40.790 "raid_level": "raid1", 00:09:40.790 "superblock": true, 00:09:40.790 "num_base_bdevs": 3, 00:09:40.790 "num_base_bdevs_discovered": 3, 00:09:40.790 "num_base_bdevs_operational": 3, 00:09:40.790 "base_bdevs_list": [ 00:09:40.790 { 00:09:40.790 "name": "BaseBdev1", 00:09:40.790 "uuid": "cde0fd8c-c582-4c99-a26f-9ed9f5064680", 00:09:40.790 "is_configured": true, 00:09:40.790 "data_offset": 2048, 00:09:40.790 "data_size": 63488 00:09:40.790 }, 00:09:40.790 { 00:09:40.790 "name": "BaseBdev2", 00:09:40.790 "uuid": "ead0354f-e7be-4f56-95a4-5d6fed77a96c", 00:09:40.790 "is_configured": true, 00:09:40.790 "data_offset": 2048, 00:09:40.790 "data_size": 63488 00:09:40.790 }, 00:09:40.790 { 
00:09:40.790 "name": "BaseBdev3", 00:09:40.790 "uuid": "c585f4a2-3f02-4122-ae12-e13abbc9ad39", 00:09:40.790 "is_configured": true, 00:09:40.790 "data_offset": 2048, 00:09:40.790 "data_size": 63488 00:09:40.790 } 00:09:40.790 ] 00:09:40.790 } 00:09:40.790 } 00:09:40.790 }' 00:09:40.790 04:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:40.790 04:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:40.790 BaseBdev2 00:09:40.790 BaseBdev3' 00:09:40.790 04:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:40.790 04:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:40.790 04:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:40.790 04:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:40.790 04:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.790 04:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.790 04:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:40.790 04:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.790 04:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:40.790 04:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:40.790 04:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:40.790 04:07:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:40.790 04:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:40.790 04:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.790 04:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.050 04:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.050 04:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:41.050 04:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:41.050 04:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:41.050 04:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.050 04:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:41.050 04:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.050 04:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.050 04:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.050 04:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:41.050 04:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:41.050 04:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:41.050 04:07:40 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.051 04:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.051 [2024-11-21 04:07:40.823310] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:41.051 04:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.051 04:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:41.051 04:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:41.051 04:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:41.051 04:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:09:41.051 04:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:41.051 04:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:41.051 04:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:41.051 04:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:41.051 04:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:41.051 04:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:41.051 04:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:41.051 04:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.051 04:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.051 04:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.051 
04:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.051 04:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.051 04:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.051 04:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.051 04:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:41.051 04:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.051 04:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.051 "name": "Existed_Raid", 00:09:41.051 "uuid": "8e7640f0-bfc2-4ab6-b706-c9ceb84ab6d6", 00:09:41.051 "strip_size_kb": 0, 00:09:41.051 "state": "online", 00:09:41.051 "raid_level": "raid1", 00:09:41.051 "superblock": true, 00:09:41.051 "num_base_bdevs": 3, 00:09:41.051 "num_base_bdevs_discovered": 2, 00:09:41.051 "num_base_bdevs_operational": 2, 00:09:41.051 "base_bdevs_list": [ 00:09:41.051 { 00:09:41.051 "name": null, 00:09:41.051 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.051 "is_configured": false, 00:09:41.051 "data_offset": 0, 00:09:41.051 "data_size": 63488 00:09:41.051 }, 00:09:41.051 { 00:09:41.051 "name": "BaseBdev2", 00:09:41.051 "uuid": "ead0354f-e7be-4f56-95a4-5d6fed77a96c", 00:09:41.051 "is_configured": true, 00:09:41.051 "data_offset": 2048, 00:09:41.051 "data_size": 63488 00:09:41.051 }, 00:09:41.051 { 00:09:41.051 "name": "BaseBdev3", 00:09:41.051 "uuid": "c585f4a2-3f02-4122-ae12-e13abbc9ad39", 00:09:41.051 "is_configured": true, 00:09:41.051 "data_offset": 2048, 00:09:41.051 "data_size": 63488 00:09:41.051 } 00:09:41.051 ] 00:09:41.051 }' 00:09:41.051 04:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.051 
04:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.310 04:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:41.310 04:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:41.310 04:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.310 04:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:41.310 04:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.310 04:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.571 04:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.571 04:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:41.571 04:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:41.571 04:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:41.571 04:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.571 04:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.571 [2024-11-21 04:07:41.319909] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:41.571 04:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.571 04:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:41.571 04:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:41.571 04:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:41.571 04:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.571 04:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.571 04:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:41.571 04:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.571 04:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:41.571 04:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:41.571 04:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:41.571 04:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.571 04:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.571 [2024-11-21 04:07:41.392855] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:41.571 [2024-11-21 04:07:41.392979] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:41.571 [2024-11-21 04:07:41.414261] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:41.571 [2024-11-21 04:07:41.414429] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:41.571 [2024-11-21 04:07:41.414527] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:09:41.571 04:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.571 04:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:41.571 04:07:41 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:41.571 04:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.571 04:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.571 04:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.571 04:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:41.571 04:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.571 04:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:41.571 04:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:41.571 04:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:41.571 04:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:41.571 04:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:41.571 04:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:41.571 04:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.571 04:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.571 BaseBdev2 00:09:41.571 04:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.571 04:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:41.571 04:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:41.571 04:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:09:41.571 04:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:41.571 04:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:41.571 04:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:41.571 04:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:41.571 04:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.571 04:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.571 04:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.571 04:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:41.571 04:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.571 04:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.571 [ 00:09:41.571 { 00:09:41.571 "name": "BaseBdev2", 00:09:41.571 "aliases": [ 00:09:41.571 "ae1db4d7-51e9-4920-81ab-419d4256c07e" 00:09:41.571 ], 00:09:41.571 "product_name": "Malloc disk", 00:09:41.571 "block_size": 512, 00:09:41.571 "num_blocks": 65536, 00:09:41.571 "uuid": "ae1db4d7-51e9-4920-81ab-419d4256c07e", 00:09:41.571 "assigned_rate_limits": { 00:09:41.571 "rw_ios_per_sec": 0, 00:09:41.571 "rw_mbytes_per_sec": 0, 00:09:41.571 "r_mbytes_per_sec": 0, 00:09:41.571 "w_mbytes_per_sec": 0 00:09:41.571 }, 00:09:41.571 "claimed": false, 00:09:41.571 "zoned": false, 00:09:41.571 "supported_io_types": { 00:09:41.571 "read": true, 00:09:41.571 "write": true, 00:09:41.571 "unmap": true, 00:09:41.571 "flush": true, 00:09:41.571 "reset": true, 00:09:41.571 "nvme_admin": false, 00:09:41.571 "nvme_io": false, 00:09:41.571 
"nvme_io_md": false, 00:09:41.571 "write_zeroes": true, 00:09:41.571 "zcopy": true, 00:09:41.571 "get_zone_info": false, 00:09:41.571 "zone_management": false, 00:09:41.571 "zone_append": false, 00:09:41.571 "compare": false, 00:09:41.571 "compare_and_write": false, 00:09:41.571 "abort": true, 00:09:41.571 "seek_hole": false, 00:09:41.571 "seek_data": false, 00:09:41.571 "copy": true, 00:09:41.571 "nvme_iov_md": false 00:09:41.571 }, 00:09:41.571 "memory_domains": [ 00:09:41.571 { 00:09:41.571 "dma_device_id": "system", 00:09:41.571 "dma_device_type": 1 00:09:41.571 }, 00:09:41.571 { 00:09:41.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.571 "dma_device_type": 2 00:09:41.571 } 00:09:41.571 ], 00:09:41.571 "driver_specific": {} 00:09:41.571 } 00:09:41.571 ] 00:09:41.571 04:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.571 04:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:41.571 04:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:41.571 04:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:41.571 04:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:41.571 04:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.571 04:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.831 BaseBdev3 00:09:41.831 04:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.831 04:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:41.831 04:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:41.831 04:07:41 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:41.831 04:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:41.831 04:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:41.831 04:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:41.831 04:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:41.831 04:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.831 04:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.831 04:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.831 04:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:41.831 04:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.831 04:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.831 [ 00:09:41.831 { 00:09:41.831 "name": "BaseBdev3", 00:09:41.831 "aliases": [ 00:09:41.831 "478a2d06-a3e7-4851-a395-ec6d83ba5d4a" 00:09:41.831 ], 00:09:41.831 "product_name": "Malloc disk", 00:09:41.831 "block_size": 512, 00:09:41.831 "num_blocks": 65536, 00:09:41.831 "uuid": "478a2d06-a3e7-4851-a395-ec6d83ba5d4a", 00:09:41.831 "assigned_rate_limits": { 00:09:41.831 "rw_ios_per_sec": 0, 00:09:41.831 "rw_mbytes_per_sec": 0, 00:09:41.831 "r_mbytes_per_sec": 0, 00:09:41.831 "w_mbytes_per_sec": 0 00:09:41.831 }, 00:09:41.831 "claimed": false, 00:09:41.831 "zoned": false, 00:09:41.831 "supported_io_types": { 00:09:41.831 "read": true, 00:09:41.831 "write": true, 00:09:41.831 "unmap": true, 00:09:41.831 "flush": true, 00:09:41.831 "reset": true, 00:09:41.831 "nvme_admin": false, 
00:09:41.831 "nvme_io": false, 00:09:41.831 "nvme_io_md": false, 00:09:41.831 "write_zeroes": true, 00:09:41.831 "zcopy": true, 00:09:41.831 "get_zone_info": false, 00:09:41.831 "zone_management": false, 00:09:41.831 "zone_append": false, 00:09:41.831 "compare": false, 00:09:41.831 "compare_and_write": false, 00:09:41.831 "abort": true, 00:09:41.831 "seek_hole": false, 00:09:41.831 "seek_data": false, 00:09:41.831 "copy": true, 00:09:41.831 "nvme_iov_md": false 00:09:41.831 }, 00:09:41.831 "memory_domains": [ 00:09:41.831 { 00:09:41.831 "dma_device_id": "system", 00:09:41.831 "dma_device_type": 1 00:09:41.831 }, 00:09:41.831 { 00:09:41.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.831 "dma_device_type": 2 00:09:41.831 } 00:09:41.831 ], 00:09:41.831 "driver_specific": {} 00:09:41.832 } 00:09:41.832 ] 00:09:41.832 04:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.832 04:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:41.832 04:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:41.832 04:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:41.832 04:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:41.832 04:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.832 04:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.832 [2024-11-21 04:07:41.591124] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:41.832 [2024-11-21 04:07:41.591247] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:41.832 [2024-11-21 04:07:41.591304] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:41.832 [2024-11-21 04:07:41.593560] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:41.832 04:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.832 04:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:41.832 04:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:41.832 04:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:41.832 04:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:41.832 04:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:41.832 04:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:41.832 04:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.832 04:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.832 04:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.832 04:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.832 04:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.832 04:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:41.832 04:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.832 04:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.832 
04:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.832 04:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.832 "name": "Existed_Raid", 00:09:41.832 "uuid": "fdc083ce-39ff-48c8-a31a-c2d3ec0f878a", 00:09:41.832 "strip_size_kb": 0, 00:09:41.832 "state": "configuring", 00:09:41.832 "raid_level": "raid1", 00:09:41.832 "superblock": true, 00:09:41.832 "num_base_bdevs": 3, 00:09:41.832 "num_base_bdevs_discovered": 2, 00:09:41.832 "num_base_bdevs_operational": 3, 00:09:41.832 "base_bdevs_list": [ 00:09:41.832 { 00:09:41.832 "name": "BaseBdev1", 00:09:41.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.832 "is_configured": false, 00:09:41.832 "data_offset": 0, 00:09:41.832 "data_size": 0 00:09:41.832 }, 00:09:41.832 { 00:09:41.832 "name": "BaseBdev2", 00:09:41.832 "uuid": "ae1db4d7-51e9-4920-81ab-419d4256c07e", 00:09:41.832 "is_configured": true, 00:09:41.832 "data_offset": 2048, 00:09:41.832 "data_size": 63488 00:09:41.832 }, 00:09:41.832 { 00:09:41.832 "name": "BaseBdev3", 00:09:41.832 "uuid": "478a2d06-a3e7-4851-a395-ec6d83ba5d4a", 00:09:41.832 "is_configured": true, 00:09:41.832 "data_offset": 2048, 00:09:41.832 "data_size": 63488 00:09:41.832 } 00:09:41.832 ] 00:09:41.832 }' 00:09:41.832 04:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.832 04:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.091 04:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:42.091 04:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.091 04:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.091 [2024-11-21 04:07:42.038402] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:42.091 04:07:42 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.091 04:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:42.091 04:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:42.091 04:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:42.091 04:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:42.091 04:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:42.091 04:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:42.091 04:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.091 04:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.091 04:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:42.091 04:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.091 04:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:42.091 04:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.091 04:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.091 04:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.350 04:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.350 04:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.350 "name": 
"Existed_Raid", 00:09:42.350 "uuid": "fdc083ce-39ff-48c8-a31a-c2d3ec0f878a", 00:09:42.350 "strip_size_kb": 0, 00:09:42.350 "state": "configuring", 00:09:42.350 "raid_level": "raid1", 00:09:42.350 "superblock": true, 00:09:42.350 "num_base_bdevs": 3, 00:09:42.350 "num_base_bdevs_discovered": 1, 00:09:42.350 "num_base_bdevs_operational": 3, 00:09:42.350 "base_bdevs_list": [ 00:09:42.350 { 00:09:42.350 "name": "BaseBdev1", 00:09:42.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:42.350 "is_configured": false, 00:09:42.350 "data_offset": 0, 00:09:42.350 "data_size": 0 00:09:42.350 }, 00:09:42.350 { 00:09:42.350 "name": null, 00:09:42.350 "uuid": "ae1db4d7-51e9-4920-81ab-419d4256c07e", 00:09:42.350 "is_configured": false, 00:09:42.350 "data_offset": 0, 00:09:42.350 "data_size": 63488 00:09:42.350 }, 00:09:42.350 { 00:09:42.350 "name": "BaseBdev3", 00:09:42.350 "uuid": "478a2d06-a3e7-4851-a395-ec6d83ba5d4a", 00:09:42.350 "is_configured": true, 00:09:42.350 "data_offset": 2048, 00:09:42.350 "data_size": 63488 00:09:42.350 } 00:09:42.350 ] 00:09:42.350 }' 00:09:42.350 04:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.350 04:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.611 04:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:42.611 04:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.611 04:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.611 04:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.611 04:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.611 04:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:42.611 
04:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:42.611 04:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.611 04:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.611 [2024-11-21 04:07:42.486428] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:42.611 BaseBdev1 00:09:42.611 04:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.611 04:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:42.611 04:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:42.611 04:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:42.611 04:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:42.611 04:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:42.611 04:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:42.611 04:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:42.611 04:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.611 04:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.611 04:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.611 04:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:42.611 04:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:42.611 04:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.611 [ 00:09:42.611 { 00:09:42.611 "name": "BaseBdev1", 00:09:42.611 "aliases": [ 00:09:42.611 "7da0140d-8dcd-4cc8-b3fb-17de874f56bf" 00:09:42.611 ], 00:09:42.611 "product_name": "Malloc disk", 00:09:42.611 "block_size": 512, 00:09:42.611 "num_blocks": 65536, 00:09:42.611 "uuid": "7da0140d-8dcd-4cc8-b3fb-17de874f56bf", 00:09:42.611 "assigned_rate_limits": { 00:09:42.611 "rw_ios_per_sec": 0, 00:09:42.611 "rw_mbytes_per_sec": 0, 00:09:42.611 "r_mbytes_per_sec": 0, 00:09:42.611 "w_mbytes_per_sec": 0 00:09:42.611 }, 00:09:42.611 "claimed": true, 00:09:42.611 "claim_type": "exclusive_write", 00:09:42.611 "zoned": false, 00:09:42.611 "supported_io_types": { 00:09:42.611 "read": true, 00:09:42.611 "write": true, 00:09:42.611 "unmap": true, 00:09:42.611 "flush": true, 00:09:42.611 "reset": true, 00:09:42.611 "nvme_admin": false, 00:09:42.611 "nvme_io": false, 00:09:42.611 "nvme_io_md": false, 00:09:42.611 "write_zeroes": true, 00:09:42.611 "zcopy": true, 00:09:42.611 "get_zone_info": false, 00:09:42.611 "zone_management": false, 00:09:42.611 "zone_append": false, 00:09:42.611 "compare": false, 00:09:42.611 "compare_and_write": false, 00:09:42.611 "abort": true, 00:09:42.611 "seek_hole": false, 00:09:42.611 "seek_data": false, 00:09:42.611 "copy": true, 00:09:42.611 "nvme_iov_md": false 00:09:42.611 }, 00:09:42.611 "memory_domains": [ 00:09:42.611 { 00:09:42.611 "dma_device_id": "system", 00:09:42.611 "dma_device_type": 1 00:09:42.611 }, 00:09:42.611 { 00:09:42.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.611 "dma_device_type": 2 00:09:42.611 } 00:09:42.611 ], 00:09:42.611 "driver_specific": {} 00:09:42.611 } 00:09:42.611 ] 00:09:42.611 04:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.611 04:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:42.611 
04:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:42.611 04:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:42.611 04:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:42.611 04:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:42.611 04:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:42.611 04:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:42.611 04:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.611 04:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.611 04:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:42.611 04:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.611 04:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.611 04:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:42.611 04:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.611 04:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.611 04:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.611 04:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.611 "name": "Existed_Raid", 00:09:42.611 "uuid": "fdc083ce-39ff-48c8-a31a-c2d3ec0f878a", 00:09:42.611 "strip_size_kb": 0, 
00:09:42.611 "state": "configuring", 00:09:42.611 "raid_level": "raid1", 00:09:42.611 "superblock": true, 00:09:42.611 "num_base_bdevs": 3, 00:09:42.611 "num_base_bdevs_discovered": 2, 00:09:42.611 "num_base_bdevs_operational": 3, 00:09:42.611 "base_bdevs_list": [ 00:09:42.611 { 00:09:42.611 "name": "BaseBdev1", 00:09:42.611 "uuid": "7da0140d-8dcd-4cc8-b3fb-17de874f56bf", 00:09:42.611 "is_configured": true, 00:09:42.611 "data_offset": 2048, 00:09:42.611 "data_size": 63488 00:09:42.611 }, 00:09:42.611 { 00:09:42.611 "name": null, 00:09:42.611 "uuid": "ae1db4d7-51e9-4920-81ab-419d4256c07e", 00:09:42.611 "is_configured": false, 00:09:42.611 "data_offset": 0, 00:09:42.611 "data_size": 63488 00:09:42.611 }, 00:09:42.611 { 00:09:42.611 "name": "BaseBdev3", 00:09:42.611 "uuid": "478a2d06-a3e7-4851-a395-ec6d83ba5d4a", 00:09:42.611 "is_configured": true, 00:09:42.611 "data_offset": 2048, 00:09:42.611 "data_size": 63488 00:09:42.611 } 00:09:42.611 ] 00:09:42.611 }' 00:09:42.611 04:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.611 04:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.180 04:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.180 04:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:43.180 04:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.180 04:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.180 04:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.180 04:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:43.180 04:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:09:43.180 04:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.180 04:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.180 [2024-11-21 04:07:43.001646] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:43.180 04:07:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.180 04:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:43.180 04:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:43.180 04:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:43.180 04:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:43.180 04:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:43.180 04:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:43.180 04:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.180 04:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.180 04:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.180 04:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.180 04:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.180 04:07:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.180 04:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:09:43.180 04:07:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.180 04:07:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.180 04:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.180 "name": "Existed_Raid", 00:09:43.180 "uuid": "fdc083ce-39ff-48c8-a31a-c2d3ec0f878a", 00:09:43.180 "strip_size_kb": 0, 00:09:43.180 "state": "configuring", 00:09:43.180 "raid_level": "raid1", 00:09:43.180 "superblock": true, 00:09:43.180 "num_base_bdevs": 3, 00:09:43.180 "num_base_bdevs_discovered": 1, 00:09:43.180 "num_base_bdevs_operational": 3, 00:09:43.180 "base_bdevs_list": [ 00:09:43.180 { 00:09:43.181 "name": "BaseBdev1", 00:09:43.181 "uuid": "7da0140d-8dcd-4cc8-b3fb-17de874f56bf", 00:09:43.181 "is_configured": true, 00:09:43.181 "data_offset": 2048, 00:09:43.181 "data_size": 63488 00:09:43.181 }, 00:09:43.181 { 00:09:43.181 "name": null, 00:09:43.181 "uuid": "ae1db4d7-51e9-4920-81ab-419d4256c07e", 00:09:43.181 "is_configured": false, 00:09:43.181 "data_offset": 0, 00:09:43.181 "data_size": 63488 00:09:43.181 }, 00:09:43.181 { 00:09:43.181 "name": null, 00:09:43.181 "uuid": "478a2d06-a3e7-4851-a395-ec6d83ba5d4a", 00:09:43.181 "is_configured": false, 00:09:43.181 "data_offset": 0, 00:09:43.181 "data_size": 63488 00:09:43.181 } 00:09:43.181 ] 00:09:43.181 }' 00:09:43.181 04:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.181 04:07:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.748 04:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:43.748 04:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.748 04:07:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:43.748 04:07:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.748 04:07:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.748 04:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:43.748 04:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:43.748 04:07:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.748 04:07:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.748 [2024-11-21 04:07:43.504843] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:43.748 04:07:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.748 04:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:43.748 04:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:43.748 04:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:43.748 04:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:43.748 04:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:43.748 04:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:43.748 04:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.748 04:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.748 04:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:09:43.748 04:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.748 04:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.748 04:07:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.748 04:07:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.748 04:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:43.748 04:07:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.748 04:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.748 "name": "Existed_Raid", 00:09:43.748 "uuid": "fdc083ce-39ff-48c8-a31a-c2d3ec0f878a", 00:09:43.748 "strip_size_kb": 0, 00:09:43.748 "state": "configuring", 00:09:43.748 "raid_level": "raid1", 00:09:43.748 "superblock": true, 00:09:43.749 "num_base_bdevs": 3, 00:09:43.749 "num_base_bdevs_discovered": 2, 00:09:43.749 "num_base_bdevs_operational": 3, 00:09:43.749 "base_bdevs_list": [ 00:09:43.749 { 00:09:43.749 "name": "BaseBdev1", 00:09:43.749 "uuid": "7da0140d-8dcd-4cc8-b3fb-17de874f56bf", 00:09:43.749 "is_configured": true, 00:09:43.749 "data_offset": 2048, 00:09:43.749 "data_size": 63488 00:09:43.749 }, 00:09:43.749 { 00:09:43.749 "name": null, 00:09:43.749 "uuid": "ae1db4d7-51e9-4920-81ab-419d4256c07e", 00:09:43.749 "is_configured": false, 00:09:43.749 "data_offset": 0, 00:09:43.749 "data_size": 63488 00:09:43.749 }, 00:09:43.749 { 00:09:43.749 "name": "BaseBdev3", 00:09:43.749 "uuid": "478a2d06-a3e7-4851-a395-ec6d83ba5d4a", 00:09:43.749 "is_configured": true, 00:09:43.749 "data_offset": 2048, 00:09:43.749 "data_size": 63488 00:09:43.749 } 00:09:43.749 ] 00:09:43.749 }' 00:09:43.749 04:07:43 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.749 04:07:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.008 04:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.008 04:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:44.008 04:07:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.008 04:07:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.008 04:07:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.008 04:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:44.008 04:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:44.008 04:07:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.008 04:07:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.269 [2024-11-21 04:07:43.984109] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:44.269 04:07:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.269 04:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:44.269 04:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:44.269 04:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:44.269 04:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:44.269 04:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:09:44.269 04:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:44.269 04:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.269 04:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.269 04:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.269 04:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.269 04:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.269 04:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:44.269 04:07:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.269 04:07:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.269 04:07:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.269 04:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.269 "name": "Existed_Raid", 00:09:44.269 "uuid": "fdc083ce-39ff-48c8-a31a-c2d3ec0f878a", 00:09:44.269 "strip_size_kb": 0, 00:09:44.269 "state": "configuring", 00:09:44.269 "raid_level": "raid1", 00:09:44.269 "superblock": true, 00:09:44.269 "num_base_bdevs": 3, 00:09:44.269 "num_base_bdevs_discovered": 1, 00:09:44.269 "num_base_bdevs_operational": 3, 00:09:44.269 "base_bdevs_list": [ 00:09:44.269 { 00:09:44.269 "name": null, 00:09:44.269 "uuid": "7da0140d-8dcd-4cc8-b3fb-17de874f56bf", 00:09:44.269 "is_configured": false, 00:09:44.269 "data_offset": 0, 00:09:44.269 "data_size": 63488 00:09:44.269 }, 00:09:44.269 { 00:09:44.269 "name": null, 00:09:44.269 "uuid": 
"ae1db4d7-51e9-4920-81ab-419d4256c07e", 00:09:44.269 "is_configured": false, 00:09:44.269 "data_offset": 0, 00:09:44.269 "data_size": 63488 00:09:44.269 }, 00:09:44.269 { 00:09:44.269 "name": "BaseBdev3", 00:09:44.269 "uuid": "478a2d06-a3e7-4851-a395-ec6d83ba5d4a", 00:09:44.269 "is_configured": true, 00:09:44.269 "data_offset": 2048, 00:09:44.269 "data_size": 63488 00:09:44.269 } 00:09:44.269 ] 00:09:44.269 }' 00:09:44.269 04:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.269 04:07:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.528 04:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:44.528 04:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.528 04:07:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.528 04:07:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.528 04:07:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.528 04:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:44.528 04:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:44.528 04:07:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.528 04:07:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.528 [2024-11-21 04:07:44.483362] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:44.528 04:07:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.528 04:07:44 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:44.528 04:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:44.528 04:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:44.528 04:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:44.528 04:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:44.528 04:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:44.528 04:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.528 04:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.528 04:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.528 04:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.528 04:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.528 04:07:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.528 04:07:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.528 04:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:44.787 04:07:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.787 04:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.787 "name": "Existed_Raid", 00:09:44.787 "uuid": "fdc083ce-39ff-48c8-a31a-c2d3ec0f878a", 00:09:44.787 "strip_size_kb": 0, 00:09:44.787 "state": "configuring", 00:09:44.787 
"raid_level": "raid1", 00:09:44.787 "superblock": true, 00:09:44.787 "num_base_bdevs": 3, 00:09:44.787 "num_base_bdevs_discovered": 2, 00:09:44.787 "num_base_bdevs_operational": 3, 00:09:44.787 "base_bdevs_list": [ 00:09:44.787 { 00:09:44.787 "name": null, 00:09:44.787 "uuid": "7da0140d-8dcd-4cc8-b3fb-17de874f56bf", 00:09:44.787 "is_configured": false, 00:09:44.787 "data_offset": 0, 00:09:44.787 "data_size": 63488 00:09:44.787 }, 00:09:44.787 { 00:09:44.787 "name": "BaseBdev2", 00:09:44.787 "uuid": "ae1db4d7-51e9-4920-81ab-419d4256c07e", 00:09:44.787 "is_configured": true, 00:09:44.787 "data_offset": 2048, 00:09:44.787 "data_size": 63488 00:09:44.787 }, 00:09:44.787 { 00:09:44.787 "name": "BaseBdev3", 00:09:44.787 "uuid": "478a2d06-a3e7-4851-a395-ec6d83ba5d4a", 00:09:44.787 "is_configured": true, 00:09:44.787 "data_offset": 2048, 00:09:44.787 "data_size": 63488 00:09:44.787 } 00:09:44.787 ] 00:09:44.787 }' 00:09:44.787 04:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.787 04:07:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.046 04:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.046 04:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:45.046 04:07:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.046 04:07:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.046 04:07:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.046 04:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:45.046 04:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.046 04:07:44 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.046 04:07:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.046 04:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:45.046 04:07:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.046 04:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 7da0140d-8dcd-4cc8-b3fb-17de874f56bf 00:09:45.046 04:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.046 04:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.308 [2024-11-21 04:07:45.035115] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:45.308 [2024-11-21 04:07:45.035465] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:09:45.308 [2024-11-21 04:07:45.035485] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:45.308 [2024-11-21 04:07:45.035782] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:09:45.308 NewBaseBdev 00:09:45.308 [2024-11-21 04:07:45.035910] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:09:45.308 [2024-11-21 04:07:45.035926] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:09:45.308 [2024-11-21 04:07:45.036064] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:45.308 04:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.308 04:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:45.308 
04:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:45.308 04:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:45.308 04:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:45.308 04:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:45.308 04:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:45.308 04:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:45.308 04:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.308 04:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.308 04:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.308 04:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:45.308 04:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.308 04:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.308 [ 00:09:45.308 { 00:09:45.308 "name": "NewBaseBdev", 00:09:45.308 "aliases": [ 00:09:45.308 "7da0140d-8dcd-4cc8-b3fb-17de874f56bf" 00:09:45.308 ], 00:09:45.308 "product_name": "Malloc disk", 00:09:45.308 "block_size": 512, 00:09:45.308 "num_blocks": 65536, 00:09:45.308 "uuid": "7da0140d-8dcd-4cc8-b3fb-17de874f56bf", 00:09:45.308 "assigned_rate_limits": { 00:09:45.308 "rw_ios_per_sec": 0, 00:09:45.308 "rw_mbytes_per_sec": 0, 00:09:45.308 "r_mbytes_per_sec": 0, 00:09:45.308 "w_mbytes_per_sec": 0 00:09:45.308 }, 00:09:45.308 "claimed": true, 00:09:45.308 "claim_type": "exclusive_write", 00:09:45.308 
"zoned": false, 00:09:45.308 "supported_io_types": { 00:09:45.308 "read": true, 00:09:45.308 "write": true, 00:09:45.308 "unmap": true, 00:09:45.308 "flush": true, 00:09:45.308 "reset": true, 00:09:45.308 "nvme_admin": false, 00:09:45.308 "nvme_io": false, 00:09:45.309 "nvme_io_md": false, 00:09:45.309 "write_zeroes": true, 00:09:45.309 "zcopy": true, 00:09:45.309 "get_zone_info": false, 00:09:45.309 "zone_management": false, 00:09:45.309 "zone_append": false, 00:09:45.309 "compare": false, 00:09:45.309 "compare_and_write": false, 00:09:45.309 "abort": true, 00:09:45.309 "seek_hole": false, 00:09:45.309 "seek_data": false, 00:09:45.309 "copy": true, 00:09:45.309 "nvme_iov_md": false 00:09:45.309 }, 00:09:45.309 "memory_domains": [ 00:09:45.309 { 00:09:45.309 "dma_device_id": "system", 00:09:45.309 "dma_device_type": 1 00:09:45.309 }, 00:09:45.309 { 00:09:45.309 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.309 "dma_device_type": 2 00:09:45.309 } 00:09:45.309 ], 00:09:45.309 "driver_specific": {} 00:09:45.309 } 00:09:45.309 ] 00:09:45.309 04:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.309 04:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:45.309 04:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:45.309 04:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:45.309 04:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:45.309 04:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:45.309 04:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:45.309 04:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:09:45.309 04:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.309 04:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.309 04:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.309 04:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.309 04:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:45.309 04:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.309 04:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.309 04:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.309 04:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.309 04:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.309 "name": "Existed_Raid", 00:09:45.309 "uuid": "fdc083ce-39ff-48c8-a31a-c2d3ec0f878a", 00:09:45.309 "strip_size_kb": 0, 00:09:45.309 "state": "online", 00:09:45.309 "raid_level": "raid1", 00:09:45.309 "superblock": true, 00:09:45.309 "num_base_bdevs": 3, 00:09:45.309 "num_base_bdevs_discovered": 3, 00:09:45.309 "num_base_bdevs_operational": 3, 00:09:45.309 "base_bdevs_list": [ 00:09:45.309 { 00:09:45.309 "name": "NewBaseBdev", 00:09:45.309 "uuid": "7da0140d-8dcd-4cc8-b3fb-17de874f56bf", 00:09:45.309 "is_configured": true, 00:09:45.309 "data_offset": 2048, 00:09:45.309 "data_size": 63488 00:09:45.309 }, 00:09:45.309 { 00:09:45.309 "name": "BaseBdev2", 00:09:45.309 "uuid": "ae1db4d7-51e9-4920-81ab-419d4256c07e", 00:09:45.309 "is_configured": true, 00:09:45.309 "data_offset": 2048, 00:09:45.309 "data_size": 63488 00:09:45.309 }, 00:09:45.309 
{ 00:09:45.309 "name": "BaseBdev3", 00:09:45.309 "uuid": "478a2d06-a3e7-4851-a395-ec6d83ba5d4a", 00:09:45.309 "is_configured": true, 00:09:45.309 "data_offset": 2048, 00:09:45.309 "data_size": 63488 00:09:45.309 } 00:09:45.309 ] 00:09:45.309 }' 00:09:45.309 04:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.309 04:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.569 04:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:45.569 04:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:45.569 04:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:45.569 04:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:45.569 04:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:45.569 04:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:45.569 04:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:45.569 04:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:45.569 04:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.569 04:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.569 [2024-11-21 04:07:45.502676] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:45.569 04:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.569 04:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:45.569 "name": "Existed_Raid", 00:09:45.569 
"aliases": [ 00:09:45.569 "fdc083ce-39ff-48c8-a31a-c2d3ec0f878a" 00:09:45.569 ], 00:09:45.569 "product_name": "Raid Volume", 00:09:45.569 "block_size": 512, 00:09:45.569 "num_blocks": 63488, 00:09:45.569 "uuid": "fdc083ce-39ff-48c8-a31a-c2d3ec0f878a", 00:09:45.569 "assigned_rate_limits": { 00:09:45.569 "rw_ios_per_sec": 0, 00:09:45.569 "rw_mbytes_per_sec": 0, 00:09:45.569 "r_mbytes_per_sec": 0, 00:09:45.569 "w_mbytes_per_sec": 0 00:09:45.569 }, 00:09:45.569 "claimed": false, 00:09:45.569 "zoned": false, 00:09:45.569 "supported_io_types": { 00:09:45.569 "read": true, 00:09:45.569 "write": true, 00:09:45.569 "unmap": false, 00:09:45.569 "flush": false, 00:09:45.569 "reset": true, 00:09:45.569 "nvme_admin": false, 00:09:45.569 "nvme_io": false, 00:09:45.569 "nvme_io_md": false, 00:09:45.569 "write_zeroes": true, 00:09:45.569 "zcopy": false, 00:09:45.569 "get_zone_info": false, 00:09:45.569 "zone_management": false, 00:09:45.569 "zone_append": false, 00:09:45.569 "compare": false, 00:09:45.569 "compare_and_write": false, 00:09:45.569 "abort": false, 00:09:45.569 "seek_hole": false, 00:09:45.569 "seek_data": false, 00:09:45.569 "copy": false, 00:09:45.569 "nvme_iov_md": false 00:09:45.569 }, 00:09:45.569 "memory_domains": [ 00:09:45.569 { 00:09:45.569 "dma_device_id": "system", 00:09:45.569 "dma_device_type": 1 00:09:45.569 }, 00:09:45.569 { 00:09:45.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.569 "dma_device_type": 2 00:09:45.569 }, 00:09:45.569 { 00:09:45.569 "dma_device_id": "system", 00:09:45.569 "dma_device_type": 1 00:09:45.569 }, 00:09:45.569 { 00:09:45.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.569 "dma_device_type": 2 00:09:45.569 }, 00:09:45.569 { 00:09:45.569 "dma_device_id": "system", 00:09:45.569 "dma_device_type": 1 00:09:45.569 }, 00:09:45.569 { 00:09:45.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.569 "dma_device_type": 2 00:09:45.569 } 00:09:45.569 ], 00:09:45.569 "driver_specific": { 00:09:45.569 "raid": { 00:09:45.569 
"uuid": "fdc083ce-39ff-48c8-a31a-c2d3ec0f878a", 00:09:45.569 "strip_size_kb": 0, 00:09:45.569 "state": "online", 00:09:45.569 "raid_level": "raid1", 00:09:45.569 "superblock": true, 00:09:45.569 "num_base_bdevs": 3, 00:09:45.569 "num_base_bdevs_discovered": 3, 00:09:45.569 "num_base_bdevs_operational": 3, 00:09:45.569 "base_bdevs_list": [ 00:09:45.569 { 00:09:45.569 "name": "NewBaseBdev", 00:09:45.569 "uuid": "7da0140d-8dcd-4cc8-b3fb-17de874f56bf", 00:09:45.569 "is_configured": true, 00:09:45.569 "data_offset": 2048, 00:09:45.569 "data_size": 63488 00:09:45.569 }, 00:09:45.569 { 00:09:45.569 "name": "BaseBdev2", 00:09:45.569 "uuid": "ae1db4d7-51e9-4920-81ab-419d4256c07e", 00:09:45.569 "is_configured": true, 00:09:45.569 "data_offset": 2048, 00:09:45.569 "data_size": 63488 00:09:45.569 }, 00:09:45.569 { 00:09:45.569 "name": "BaseBdev3", 00:09:45.569 "uuid": "478a2d06-a3e7-4851-a395-ec6d83ba5d4a", 00:09:45.569 "is_configured": true, 00:09:45.569 "data_offset": 2048, 00:09:45.569 "data_size": 63488 00:09:45.569 } 00:09:45.569 ] 00:09:45.569 } 00:09:45.569 } 00:09:45.569 }' 00:09:45.828 04:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:45.828 04:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:45.828 BaseBdev2 00:09:45.828 BaseBdev3' 00:09:45.828 04:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:45.828 04:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:45.828 04:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:45.828 04:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:45.828 04:07:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:45.828 04:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.828 04:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.828 04:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.828 04:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:45.828 04:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:45.828 04:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:45.828 04:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:45.828 04:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:45.829 04:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.829 04:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.829 04:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.829 04:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:45.829 04:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:45.829 04:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:45.829 04:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:45.829 04:07:45 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.829 04:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.829 04:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:45.829 04:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.829 04:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:45.829 04:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:45.829 04:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:45.829 04:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.829 04:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.829 [2024-11-21 04:07:45.781867] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:45.829 [2024-11-21 04:07:45.781941] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:45.829 [2024-11-21 04:07:45.782039] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:45.829 [2024-11-21 04:07:45.782362] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:45.829 [2024-11-21 04:07:45.782377] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:09:45.829 04:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.829 04:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 79075 00:09:45.829 04:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # 
'[' -z 79075 ']' 00:09:45.829 04:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 79075 00:09:45.829 04:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:45.829 04:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:45.829 04:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79075 00:09:46.087 04:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:46.087 04:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:46.087 04:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79075' 00:09:46.087 killing process with pid 79075 00:09:46.088 04:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 79075 00:09:46.088 [2024-11-21 04:07:45.832470] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:46.088 04:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 79075 00:09:46.088 [2024-11-21 04:07:45.893146] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:46.347 04:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:46.347 00:09:46.347 real 0m9.010s 00:09:46.347 user 0m15.057s 00:09:46.347 sys 0m1.991s 00:09:46.347 04:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:46.347 04:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.347 ************************************ 00:09:46.347 END TEST raid_state_function_test_sb 00:09:46.347 ************************************ 00:09:46.347 04:07:46 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test 
raid_superblock_test raid1 3 00:09:46.347 04:07:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:46.347 04:07:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:46.347 04:07:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:46.347 ************************************ 00:09:46.347 START TEST raid_superblock_test 00:09:46.347 ************************************ 00:09:46.347 04:07:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3 00:09:46.347 04:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:09:46.347 04:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:46.347 04:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:46.347 04:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:46.347 04:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:46.347 04:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:46.347 04:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:46.347 04:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:46.347 04:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:46.347 04:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:46.347 04:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:46.347 04:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:46.347 04:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:46.347 04:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' 
raid1 '!=' raid1 ']' 00:09:46.347 04:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:09:46.347 04:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=79684 00:09:46.347 04:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:46.347 04:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 79684 00:09:46.347 04:07:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 79684 ']' 00:09:46.347 04:07:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:46.347 04:07:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:46.347 04:07:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:46.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:46.347 04:07:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:46.347 04:07:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.606 [2024-11-21 04:07:46.381779] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:09:46.606 [2024-11-21 04:07:46.382005] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79684 ] 00:09:46.606 [2024-11-21 04:07:46.537980] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.865 [2024-11-21 04:07:46.579267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.865 [2024-11-21 04:07:46.655476] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:46.865 [2024-11-21 04:07:46.655641] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:47.434 04:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:47.434 04:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:47.434 04:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:47.434 04:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:47.434 04:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:47.434 04:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:47.434 04:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:47.434 04:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:47.435 04:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:47.435 04:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:47.435 04:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:47.435 
04:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.435 04:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.435 malloc1 00:09:47.435 04:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.435 04:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:47.435 04:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.435 04:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.435 [2024-11-21 04:07:47.237787] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:47.435 [2024-11-21 04:07:47.237852] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:47.435 [2024-11-21 04:07:47.237875] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:09:47.435 [2024-11-21 04:07:47.237890] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:47.435 [2024-11-21 04:07:47.240380] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:47.435 [2024-11-21 04:07:47.240419] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:47.435 pt1 00:09:47.435 04:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.435 04:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:47.435 04:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:47.435 04:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:47.435 04:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:47.435 04:07:47 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:47.435 04:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:47.435 04:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:47.435 04:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:47.435 04:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:47.435 04:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.435 04:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.435 malloc2 00:09:47.435 04:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.435 04:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:47.435 04:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.435 04:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.435 [2024-11-21 04:07:47.272285] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:47.435 [2024-11-21 04:07:47.272397] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:47.435 [2024-11-21 04:07:47.272432] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:47.435 [2024-11-21 04:07:47.272462] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:47.435 [2024-11-21 04:07:47.274870] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:47.435 [2024-11-21 04:07:47.274942] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:47.435 
pt2 00:09:47.435 04:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.435 04:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:47.435 04:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:47.435 04:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:47.435 04:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:47.435 04:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:47.435 04:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:47.435 04:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:47.435 04:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:47.435 04:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:47.435 04:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.435 04:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.435 malloc3 00:09:47.435 04:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.435 04:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:47.435 04:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.435 04:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.435 [2024-11-21 04:07:47.311133] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:47.435 [2024-11-21 04:07:47.311268] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:47.435 [2024-11-21 04:07:47.311316] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:47.435 [2024-11-21 04:07:47.311385] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:47.435 [2024-11-21 04:07:47.314002] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:47.435 [2024-11-21 04:07:47.314083] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:47.435 pt3 00:09:47.435 04:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.435 04:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:47.435 04:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:47.435 04:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:47.435 04:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.435 04:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.435 [2024-11-21 04:07:47.323160] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:47.435 [2024-11-21 04:07:47.325454] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:47.435 [2024-11-21 04:07:47.325513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:47.435 [2024-11-21 04:07:47.325664] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:09:47.435 [2024-11-21 04:07:47.325676] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:47.435 [2024-11-21 04:07:47.325954] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:09:47.435 
[2024-11-21 04:07:47.326097] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:09:47.435 [2024-11-21 04:07:47.326109] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:09:47.435 [2024-11-21 04:07:47.326245] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:47.435 04:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.435 04:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:47.435 04:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:47.435 04:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:47.435 04:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:47.435 04:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:47.435 04:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:47.435 04:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.435 04:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.435 04:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.435 04:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.435 04:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.435 04:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:47.435 04:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.435 04:07:47 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:47.435 04:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.435 04:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.435 "name": "raid_bdev1", 00:09:47.435 "uuid": "4cabd615-225f-496d-89e5-ba0e8ccd37e4", 00:09:47.435 "strip_size_kb": 0, 00:09:47.435 "state": "online", 00:09:47.435 "raid_level": "raid1", 00:09:47.435 "superblock": true, 00:09:47.435 "num_base_bdevs": 3, 00:09:47.435 "num_base_bdevs_discovered": 3, 00:09:47.435 "num_base_bdevs_operational": 3, 00:09:47.435 "base_bdevs_list": [ 00:09:47.435 { 00:09:47.435 "name": "pt1", 00:09:47.435 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:47.435 "is_configured": true, 00:09:47.435 "data_offset": 2048, 00:09:47.435 "data_size": 63488 00:09:47.435 }, 00:09:47.435 { 00:09:47.435 "name": "pt2", 00:09:47.435 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:47.435 "is_configured": true, 00:09:47.435 "data_offset": 2048, 00:09:47.435 "data_size": 63488 00:09:47.435 }, 00:09:47.435 { 00:09:47.435 "name": "pt3", 00:09:47.435 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:47.435 "is_configured": true, 00:09:47.435 "data_offset": 2048, 00:09:47.435 "data_size": 63488 00:09:47.435 } 00:09:47.435 ] 00:09:47.435 }' 00:09:47.435 04:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.435 04:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.005 04:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:48.005 04:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:48.005 04:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:48.005 04:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:48.005 04:07:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:48.005 04:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:48.005 04:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:48.005 04:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:48.005 04:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.005 04:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.005 [2024-11-21 04:07:47.766731] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:48.005 04:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.005 04:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:48.005 "name": "raid_bdev1", 00:09:48.005 "aliases": [ 00:09:48.005 "4cabd615-225f-496d-89e5-ba0e8ccd37e4" 00:09:48.005 ], 00:09:48.005 "product_name": "Raid Volume", 00:09:48.005 "block_size": 512, 00:09:48.005 "num_blocks": 63488, 00:09:48.005 "uuid": "4cabd615-225f-496d-89e5-ba0e8ccd37e4", 00:09:48.005 "assigned_rate_limits": { 00:09:48.005 "rw_ios_per_sec": 0, 00:09:48.005 "rw_mbytes_per_sec": 0, 00:09:48.005 "r_mbytes_per_sec": 0, 00:09:48.005 "w_mbytes_per_sec": 0 00:09:48.005 }, 00:09:48.005 "claimed": false, 00:09:48.005 "zoned": false, 00:09:48.005 "supported_io_types": { 00:09:48.005 "read": true, 00:09:48.005 "write": true, 00:09:48.005 "unmap": false, 00:09:48.005 "flush": false, 00:09:48.005 "reset": true, 00:09:48.005 "nvme_admin": false, 00:09:48.005 "nvme_io": false, 00:09:48.005 "nvme_io_md": false, 00:09:48.005 "write_zeroes": true, 00:09:48.005 "zcopy": false, 00:09:48.005 "get_zone_info": false, 00:09:48.005 "zone_management": false, 00:09:48.005 "zone_append": false, 00:09:48.005 "compare": false, 00:09:48.005 
"compare_and_write": false, 00:09:48.005 "abort": false, 00:09:48.005 "seek_hole": false, 00:09:48.005 "seek_data": false, 00:09:48.005 "copy": false, 00:09:48.005 "nvme_iov_md": false 00:09:48.005 }, 00:09:48.005 "memory_domains": [ 00:09:48.005 { 00:09:48.005 "dma_device_id": "system", 00:09:48.005 "dma_device_type": 1 00:09:48.005 }, 00:09:48.005 { 00:09:48.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.005 "dma_device_type": 2 00:09:48.005 }, 00:09:48.005 { 00:09:48.005 "dma_device_id": "system", 00:09:48.005 "dma_device_type": 1 00:09:48.005 }, 00:09:48.005 { 00:09:48.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.005 "dma_device_type": 2 00:09:48.005 }, 00:09:48.005 { 00:09:48.005 "dma_device_id": "system", 00:09:48.005 "dma_device_type": 1 00:09:48.005 }, 00:09:48.005 { 00:09:48.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.005 "dma_device_type": 2 00:09:48.005 } 00:09:48.005 ], 00:09:48.005 "driver_specific": { 00:09:48.005 "raid": { 00:09:48.005 "uuid": "4cabd615-225f-496d-89e5-ba0e8ccd37e4", 00:09:48.005 "strip_size_kb": 0, 00:09:48.005 "state": "online", 00:09:48.005 "raid_level": "raid1", 00:09:48.005 "superblock": true, 00:09:48.005 "num_base_bdevs": 3, 00:09:48.005 "num_base_bdevs_discovered": 3, 00:09:48.005 "num_base_bdevs_operational": 3, 00:09:48.005 "base_bdevs_list": [ 00:09:48.005 { 00:09:48.005 "name": "pt1", 00:09:48.005 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:48.005 "is_configured": true, 00:09:48.005 "data_offset": 2048, 00:09:48.005 "data_size": 63488 00:09:48.005 }, 00:09:48.005 { 00:09:48.005 "name": "pt2", 00:09:48.005 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:48.005 "is_configured": true, 00:09:48.005 "data_offset": 2048, 00:09:48.005 "data_size": 63488 00:09:48.005 }, 00:09:48.005 { 00:09:48.005 "name": "pt3", 00:09:48.005 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:48.005 "is_configured": true, 00:09:48.005 "data_offset": 2048, 00:09:48.005 "data_size": 63488 00:09:48.005 } 
00:09:48.005 ] 00:09:48.005 } 00:09:48.005 } 00:09:48.005 }' 00:09:48.005 04:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:48.005 04:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:48.005 pt2 00:09:48.005 pt3' 00:09:48.005 04:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:48.005 04:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:48.005 04:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:48.005 04:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:48.005 04:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:48.005 04:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.005 04:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.006 04:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.006 04:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:48.006 04:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:48.006 04:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:48.006 04:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:48.006 04:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:48.006 04:07:47 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.006 04:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.006 04:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.006 04:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:48.006 04:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:48.006 04:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:48.006 04:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:48.006 04:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.006 04:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:48.006 04:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.006 04:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.265 04:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:48.265 04:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:48.265 04:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:48.265 04:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.265 04:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.265 04:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:48.265 [2024-11-21 04:07:48.010248] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:48.265 04:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:09:48.265 04:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=4cabd615-225f-496d-89e5-ba0e8ccd37e4 00:09:48.265 04:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 4cabd615-225f-496d-89e5-ba0e8ccd37e4 ']' 00:09:48.265 04:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:48.265 04:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.265 04:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.265 [2024-11-21 04:07:48.045896] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:48.265 [2024-11-21 04:07:48.045967] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:48.265 [2024-11-21 04:07:48.046094] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:48.265 [2024-11-21 04:07:48.046269] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:48.265 [2024-11-21 04:07:48.046339] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:09:48.265 04:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.265 04:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.265 04:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.265 04:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:48.265 04:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.265 04:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.265 04:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:09:48.266 04:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:48.266 04:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:48.266 04:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:48.266 04:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.266 04:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.266 04:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.266 04:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:48.266 04:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:48.266 04:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.266 04:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.266 04:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.266 04:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:48.266 04:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:48.266 04:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.266 04:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.266 04:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.266 04:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:48.266 04:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:48.266 04:07:48 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.266 04:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.266 04:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.266 04:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:48.266 04:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:48.266 04:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:48.266 04:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:48.266 04:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:48.266 04:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:48.266 04:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:48.266 04:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:48.266 04:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:48.266 04:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.266 04:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.266 [2024-11-21 04:07:48.201627] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:48.266 [2024-11-21 04:07:48.203854] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:48.266 [2024-11-21 04:07:48.203902] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:48.266 [2024-11-21 04:07:48.203955] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:48.266 [2024-11-21 04:07:48.204025] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:48.266 [2024-11-21 04:07:48.204044] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:48.266 [2024-11-21 04:07:48.204057] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:48.266 [2024-11-21 04:07:48.204067] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:09:48.266 request: 00:09:48.266 { 00:09:48.266 "name": "raid_bdev1", 00:09:48.266 "raid_level": "raid1", 00:09:48.266 "base_bdevs": [ 00:09:48.266 "malloc1", 00:09:48.266 "malloc2", 00:09:48.266 "malloc3" 00:09:48.266 ], 00:09:48.266 "superblock": false, 00:09:48.266 "method": "bdev_raid_create", 00:09:48.266 "req_id": 1 00:09:48.266 } 00:09:48.266 Got JSON-RPC error response 00:09:48.266 response: 00:09:48.266 { 00:09:48.266 "code": -17, 00:09:48.266 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:48.266 } 00:09:48.266 04:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:48.266 04:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:48.266 04:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:48.266 04:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:48.266 04:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:48.266 04:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:09:48.266 04:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.266 04:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.266 04:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:48.266 04:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.525 04:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:48.525 04:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:48.525 04:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:48.525 04:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.525 04:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.525 [2024-11-21 04:07:48.269487] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:48.525 [2024-11-21 04:07:48.269593] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:48.525 [2024-11-21 04:07:48.269626] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:48.525 [2024-11-21 04:07:48.269656] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:48.525 [2024-11-21 04:07:48.272083] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:48.525 [2024-11-21 04:07:48.272156] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:48.525 [2024-11-21 04:07:48.272258] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:48.525 [2024-11-21 04:07:48.272334] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:48.525 pt1 00:09:48.525 
04:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.525 04:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:09:48.525 04:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:48.525 04:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:48.525 04:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:48.525 04:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:48.525 04:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:48.525 04:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.525 04:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.525 04:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.525 04:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.525 04:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:48.525 04:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.525 04:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.525 04:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.525 04:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.525 04:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.525 "name": "raid_bdev1", 00:09:48.525 "uuid": "4cabd615-225f-496d-89e5-ba0e8ccd37e4", 00:09:48.525 "strip_size_kb": 0, 00:09:48.525 
"state": "configuring", 00:09:48.525 "raid_level": "raid1", 00:09:48.525 "superblock": true, 00:09:48.525 "num_base_bdevs": 3, 00:09:48.525 "num_base_bdevs_discovered": 1, 00:09:48.525 "num_base_bdevs_operational": 3, 00:09:48.525 "base_bdevs_list": [ 00:09:48.525 { 00:09:48.525 "name": "pt1", 00:09:48.525 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:48.525 "is_configured": true, 00:09:48.525 "data_offset": 2048, 00:09:48.525 "data_size": 63488 00:09:48.525 }, 00:09:48.525 { 00:09:48.525 "name": null, 00:09:48.525 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:48.525 "is_configured": false, 00:09:48.525 "data_offset": 2048, 00:09:48.525 "data_size": 63488 00:09:48.525 }, 00:09:48.525 { 00:09:48.525 "name": null, 00:09:48.525 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:48.525 "is_configured": false, 00:09:48.525 "data_offset": 2048, 00:09:48.525 "data_size": 63488 00:09:48.525 } 00:09:48.525 ] 00:09:48.525 }' 00:09:48.525 04:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.525 04:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.785 04:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:48.785 04:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:48.785 04:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.785 04:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.785 [2024-11-21 04:07:48.712807] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:48.785 [2024-11-21 04:07:48.712887] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:48.785 [2024-11-21 04:07:48.712912] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:09:48.785 
[2024-11-21 04:07:48.712927] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:48.785 [2024-11-21 04:07:48.713402] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:48.785 [2024-11-21 04:07:48.713424] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:48.785 [2024-11-21 04:07:48.713508] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:48.785 [2024-11-21 04:07:48.713541] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:48.785 pt2 00:09:48.785 04:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.785 04:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:48.785 04:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.785 04:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.785 [2024-11-21 04:07:48.724797] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:48.785 04:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.785 04:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:09:48.785 04:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:48.785 04:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:48.785 04:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:48.785 04:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:48.785 04:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:48.785 04:07:48 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.785 04:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.785 04:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.785 04:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.785 04:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.785 04:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.785 04:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.785 04:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:48.785 04:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.044 04:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.044 "name": "raid_bdev1", 00:09:49.044 "uuid": "4cabd615-225f-496d-89e5-ba0e8ccd37e4", 00:09:49.044 "strip_size_kb": 0, 00:09:49.044 "state": "configuring", 00:09:49.044 "raid_level": "raid1", 00:09:49.044 "superblock": true, 00:09:49.044 "num_base_bdevs": 3, 00:09:49.044 "num_base_bdevs_discovered": 1, 00:09:49.044 "num_base_bdevs_operational": 3, 00:09:49.044 "base_bdevs_list": [ 00:09:49.044 { 00:09:49.044 "name": "pt1", 00:09:49.044 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:49.044 "is_configured": true, 00:09:49.044 "data_offset": 2048, 00:09:49.044 "data_size": 63488 00:09:49.044 }, 00:09:49.044 { 00:09:49.044 "name": null, 00:09:49.044 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:49.044 "is_configured": false, 00:09:49.044 "data_offset": 0, 00:09:49.044 "data_size": 63488 00:09:49.044 }, 00:09:49.044 { 00:09:49.044 "name": null, 00:09:49.044 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:49.044 "is_configured": false, 00:09:49.044 
"data_offset": 2048, 00:09:49.044 "data_size": 63488 00:09:49.044 } 00:09:49.044 ] 00:09:49.044 }' 00:09:49.044 04:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.044 04:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.303 04:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:49.303 04:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:49.303 04:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:49.303 04:07:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.303 04:07:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.303 [2024-11-21 04:07:49.156048] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:49.303 [2024-11-21 04:07:49.156127] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:49.303 [2024-11-21 04:07:49.156151] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:49.303 [2024-11-21 04:07:49.156161] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:49.303 [2024-11-21 04:07:49.156688] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:49.303 [2024-11-21 04:07:49.156718] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:49.303 [2024-11-21 04:07:49.156815] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:49.303 [2024-11-21 04:07:49.156840] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:49.303 pt2 00:09:49.303 04:07:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.303 04:07:49 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:49.303 04:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:49.303 04:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:49.304 04:07:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.304 04:07:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.304 [2024-11-21 04:07:49.167990] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:49.304 [2024-11-21 04:07:49.168047] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:49.304 [2024-11-21 04:07:49.168068] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:49.304 [2024-11-21 04:07:49.168076] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:49.304 [2024-11-21 04:07:49.168467] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:49.304 [2024-11-21 04:07:49.168507] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:49.304 [2024-11-21 04:07:49.168574] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:49.304 [2024-11-21 04:07:49.168598] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:49.304 [2024-11-21 04:07:49.168754] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:09:49.304 [2024-11-21 04:07:49.168770] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:49.304 [2024-11-21 04:07:49.169034] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:09:49.304 [2024-11-21 04:07:49.169148] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000001900 00:09:49.304 [2024-11-21 04:07:49.169160] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:09:49.304 [2024-11-21 04:07:49.169288] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:49.304 pt3 00:09:49.304 04:07:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.304 04:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:49.304 04:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:49.304 04:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:49.304 04:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:49.304 04:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:49.304 04:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:49.304 04:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:49.304 04:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:49.304 04:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.304 04:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.304 04:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:49.304 04:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.304 04:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.304 04:07:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.304 04:07:49 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@10 -- # set +x 00:09:49.304 04:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:49.304 04:07:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.304 04:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.304 "name": "raid_bdev1", 00:09:49.304 "uuid": "4cabd615-225f-496d-89e5-ba0e8ccd37e4", 00:09:49.304 "strip_size_kb": 0, 00:09:49.304 "state": "online", 00:09:49.304 "raid_level": "raid1", 00:09:49.304 "superblock": true, 00:09:49.304 "num_base_bdevs": 3, 00:09:49.304 "num_base_bdevs_discovered": 3, 00:09:49.304 "num_base_bdevs_operational": 3, 00:09:49.304 "base_bdevs_list": [ 00:09:49.304 { 00:09:49.304 "name": "pt1", 00:09:49.304 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:49.304 "is_configured": true, 00:09:49.304 "data_offset": 2048, 00:09:49.304 "data_size": 63488 00:09:49.304 }, 00:09:49.304 { 00:09:49.304 "name": "pt2", 00:09:49.304 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:49.304 "is_configured": true, 00:09:49.304 "data_offset": 2048, 00:09:49.304 "data_size": 63488 00:09:49.304 }, 00:09:49.304 { 00:09:49.304 "name": "pt3", 00:09:49.304 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:49.304 "is_configured": true, 00:09:49.304 "data_offset": 2048, 00:09:49.304 "data_size": 63488 00:09:49.304 } 00:09:49.304 ] 00:09:49.304 }' 00:09:49.304 04:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.304 04:07:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.870 04:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:49.870 04:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:49.871 04:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:09:49.871 04:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:49.871 04:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:49.871 04:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:49.871 04:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:49.871 04:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:49.871 04:07:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.871 04:07:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.871 [2024-11-21 04:07:49.583679] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:49.871 04:07:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.871 04:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:49.871 "name": "raid_bdev1", 00:09:49.871 "aliases": [ 00:09:49.871 "4cabd615-225f-496d-89e5-ba0e8ccd37e4" 00:09:49.871 ], 00:09:49.871 "product_name": "Raid Volume", 00:09:49.871 "block_size": 512, 00:09:49.871 "num_blocks": 63488, 00:09:49.871 "uuid": "4cabd615-225f-496d-89e5-ba0e8ccd37e4", 00:09:49.871 "assigned_rate_limits": { 00:09:49.871 "rw_ios_per_sec": 0, 00:09:49.871 "rw_mbytes_per_sec": 0, 00:09:49.871 "r_mbytes_per_sec": 0, 00:09:49.871 "w_mbytes_per_sec": 0 00:09:49.871 }, 00:09:49.871 "claimed": false, 00:09:49.871 "zoned": false, 00:09:49.871 "supported_io_types": { 00:09:49.871 "read": true, 00:09:49.871 "write": true, 00:09:49.871 "unmap": false, 00:09:49.871 "flush": false, 00:09:49.871 "reset": true, 00:09:49.871 "nvme_admin": false, 00:09:49.871 "nvme_io": false, 00:09:49.871 "nvme_io_md": false, 00:09:49.871 "write_zeroes": true, 00:09:49.871 "zcopy": false, 00:09:49.871 "get_zone_info": false, 
00:09:49.871 "zone_management": false, 00:09:49.871 "zone_append": false, 00:09:49.871 "compare": false, 00:09:49.871 "compare_and_write": false, 00:09:49.871 "abort": false, 00:09:49.871 "seek_hole": false, 00:09:49.871 "seek_data": false, 00:09:49.871 "copy": false, 00:09:49.871 "nvme_iov_md": false 00:09:49.871 }, 00:09:49.871 "memory_domains": [ 00:09:49.871 { 00:09:49.871 "dma_device_id": "system", 00:09:49.871 "dma_device_type": 1 00:09:49.871 }, 00:09:49.871 { 00:09:49.871 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.871 "dma_device_type": 2 00:09:49.871 }, 00:09:49.871 { 00:09:49.871 "dma_device_id": "system", 00:09:49.871 "dma_device_type": 1 00:09:49.871 }, 00:09:49.871 { 00:09:49.871 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.871 "dma_device_type": 2 00:09:49.871 }, 00:09:49.871 { 00:09:49.871 "dma_device_id": "system", 00:09:49.871 "dma_device_type": 1 00:09:49.871 }, 00:09:49.871 { 00:09:49.871 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.871 "dma_device_type": 2 00:09:49.871 } 00:09:49.871 ], 00:09:49.871 "driver_specific": { 00:09:49.871 "raid": { 00:09:49.871 "uuid": "4cabd615-225f-496d-89e5-ba0e8ccd37e4", 00:09:49.871 "strip_size_kb": 0, 00:09:49.871 "state": "online", 00:09:49.871 "raid_level": "raid1", 00:09:49.871 "superblock": true, 00:09:49.871 "num_base_bdevs": 3, 00:09:49.871 "num_base_bdevs_discovered": 3, 00:09:49.871 "num_base_bdevs_operational": 3, 00:09:49.871 "base_bdevs_list": [ 00:09:49.871 { 00:09:49.871 "name": "pt1", 00:09:49.871 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:49.871 "is_configured": true, 00:09:49.871 "data_offset": 2048, 00:09:49.871 "data_size": 63488 00:09:49.871 }, 00:09:49.871 { 00:09:49.871 "name": "pt2", 00:09:49.871 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:49.871 "is_configured": true, 00:09:49.871 "data_offset": 2048, 00:09:49.871 "data_size": 63488 00:09:49.871 }, 00:09:49.871 { 00:09:49.871 "name": "pt3", 00:09:49.871 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:09:49.871 "is_configured": true, 00:09:49.871 "data_offset": 2048, 00:09:49.871 "data_size": 63488 00:09:49.871 } 00:09:49.871 ] 00:09:49.871 } 00:09:49.871 } 00:09:49.871 }' 00:09:49.871 04:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:49.871 04:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:49.871 pt2 00:09:49.871 pt3' 00:09:49.871 04:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:49.871 04:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:49.871 04:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:49.871 04:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:49.871 04:07:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.871 04:07:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.871 04:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:49.871 04:07:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.871 04:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:49.871 04:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:49.871 04:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:49.871 04:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:49.871 04:07:49 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:49.871 04:07:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.871 04:07:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.871 04:07:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.871 04:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:49.871 04:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:49.871 04:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:49.871 04:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:49.871 04:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:49.871 04:07:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.871 04:07:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.871 04:07:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.871 04:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:49.871 04:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:49.871 04:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:49.871 04:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:49.871 04:07:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.871 04:07:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.131 [2024-11-21 04:07:49.843155] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:50.131 04:07:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.131 04:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 4cabd615-225f-496d-89e5-ba0e8ccd37e4 '!=' 4cabd615-225f-496d-89e5-ba0e8ccd37e4 ']' 00:09:50.131 04:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:09:50.131 04:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:50.131 04:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:50.131 04:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:09:50.131 04:07:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.131 04:07:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.132 [2024-11-21 04:07:49.886865] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:09:50.132 04:07:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.132 04:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:50.132 04:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:50.132 04:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:50.132 04:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:50.132 04:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:50.132 04:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:50.132 04:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.132 04:07:49 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.132 04:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.132 04:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.132 04:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.132 04:07:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.132 04:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:50.132 04:07:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.132 04:07:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.132 04:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.132 "name": "raid_bdev1", 00:09:50.132 "uuid": "4cabd615-225f-496d-89e5-ba0e8ccd37e4", 00:09:50.132 "strip_size_kb": 0, 00:09:50.132 "state": "online", 00:09:50.132 "raid_level": "raid1", 00:09:50.132 "superblock": true, 00:09:50.132 "num_base_bdevs": 3, 00:09:50.132 "num_base_bdevs_discovered": 2, 00:09:50.132 "num_base_bdevs_operational": 2, 00:09:50.132 "base_bdevs_list": [ 00:09:50.132 { 00:09:50.132 "name": null, 00:09:50.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.132 "is_configured": false, 00:09:50.132 "data_offset": 0, 00:09:50.132 "data_size": 63488 00:09:50.132 }, 00:09:50.132 { 00:09:50.132 "name": "pt2", 00:09:50.132 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:50.132 "is_configured": true, 00:09:50.132 "data_offset": 2048, 00:09:50.132 "data_size": 63488 00:09:50.132 }, 00:09:50.132 { 00:09:50.132 "name": "pt3", 00:09:50.132 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:50.132 "is_configured": true, 00:09:50.132 "data_offset": 2048, 00:09:50.132 "data_size": 63488 00:09:50.132 } 
00:09:50.132 ] 00:09:50.132 }' 00:09:50.132 04:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.132 04:07:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.391 04:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:50.391 04:07:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.391 04:07:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.391 [2024-11-21 04:07:50.310077] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:50.391 [2024-11-21 04:07:50.310162] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:50.391 [2024-11-21 04:07:50.310292] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:50.391 [2024-11-21 04:07:50.310419] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:50.391 [2024-11-21 04:07:50.310464] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:09:50.391 04:07:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.391 04:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.391 04:07:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.391 04:07:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.391 04:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:09:50.391 04:07:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.650 04:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:09:50.650 04:07:50 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:09:50.650 04:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:09:50.650 04:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:50.650 04:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:09:50.650 04:07:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.650 04:07:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.650 04:07:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.650 04:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:50.650 04:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:50.650 04:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:09:50.650 04:07:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.650 04:07:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.650 04:07:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.650 04:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:50.650 04:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:50.650 04:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:09:50.651 04:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:50.651 04:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:50.651 04:07:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.651 04:07:50 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.651 [2024-11-21 04:07:50.393876] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:50.651 [2024-11-21 04:07:50.393926] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:50.651 [2024-11-21 04:07:50.393946] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:09:50.651 [2024-11-21 04:07:50.393955] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:50.651 [2024-11-21 04:07:50.396596] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:50.651 [2024-11-21 04:07:50.396631] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:50.651 [2024-11-21 04:07:50.396710] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:50.651 [2024-11-21 04:07:50.396746] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:50.651 pt2 00:09:50.651 04:07:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.651 04:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:50.651 04:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:50.651 04:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:50.651 04:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:50.651 04:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:50.651 04:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:50.651 04:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.651 04:07:50 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.651 04:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.651 04:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.651 04:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.651 04:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:50.651 04:07:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.651 04:07:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.651 04:07:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.651 04:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.651 "name": "raid_bdev1", 00:09:50.651 "uuid": "4cabd615-225f-496d-89e5-ba0e8ccd37e4", 00:09:50.651 "strip_size_kb": 0, 00:09:50.651 "state": "configuring", 00:09:50.651 "raid_level": "raid1", 00:09:50.651 "superblock": true, 00:09:50.651 "num_base_bdevs": 3, 00:09:50.651 "num_base_bdevs_discovered": 1, 00:09:50.651 "num_base_bdevs_operational": 2, 00:09:50.651 "base_bdevs_list": [ 00:09:50.651 { 00:09:50.651 "name": null, 00:09:50.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.651 "is_configured": false, 00:09:50.651 "data_offset": 2048, 00:09:50.651 "data_size": 63488 00:09:50.651 }, 00:09:50.651 { 00:09:50.651 "name": "pt2", 00:09:50.651 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:50.651 "is_configured": true, 00:09:50.651 "data_offset": 2048, 00:09:50.651 "data_size": 63488 00:09:50.651 }, 00:09:50.651 { 00:09:50.651 "name": null, 00:09:50.651 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:50.651 "is_configured": false, 00:09:50.651 "data_offset": 2048, 00:09:50.651 "data_size": 63488 00:09:50.651 } 
00:09:50.651 ] 00:09:50.651 }' 00:09:50.651 04:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.651 04:07:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.910 04:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:09:50.911 04:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:50.911 04:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:09:50.911 04:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:50.911 04:07:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.911 04:07:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.911 [2024-11-21 04:07:50.829220] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:50.911 [2024-11-21 04:07:50.829369] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:50.911 [2024-11-21 04:07:50.829414] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:50.911 [2024-11-21 04:07:50.829459] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:50.911 [2024-11-21 04:07:50.830013] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:50.911 [2024-11-21 04:07:50.830071] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:50.911 [2024-11-21 04:07:50.830210] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:50.911 [2024-11-21 04:07:50.830292] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:50.911 [2024-11-21 04:07:50.830451] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 
00:09:50.911 [2024-11-21 04:07:50.830489] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:50.911 [2024-11-21 04:07:50.830803] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:09:50.911 [2024-11-21 04:07:50.830989] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:09:50.911 [2024-11-21 04:07:50.831038] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:09:50.911 [2024-11-21 04:07:50.831235] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:50.911 pt3 00:09:50.911 04:07:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.911 04:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:50.911 04:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:50.911 04:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:50.911 04:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:50.911 04:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:50.911 04:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:50.911 04:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.911 04:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.911 04:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.911 04:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.911 04:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:09:50.911 04:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.911 04:07:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.911 04:07:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.911 04:07:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.911 04:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.911 "name": "raid_bdev1", 00:09:50.911 "uuid": "4cabd615-225f-496d-89e5-ba0e8ccd37e4", 00:09:50.911 "strip_size_kb": 0, 00:09:50.911 "state": "online", 00:09:50.911 "raid_level": "raid1", 00:09:50.911 "superblock": true, 00:09:50.911 "num_base_bdevs": 3, 00:09:50.911 "num_base_bdevs_discovered": 2, 00:09:50.911 "num_base_bdevs_operational": 2, 00:09:50.911 "base_bdevs_list": [ 00:09:50.911 { 00:09:50.911 "name": null, 00:09:50.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.911 "is_configured": false, 00:09:50.911 "data_offset": 2048, 00:09:50.911 "data_size": 63488 00:09:50.911 }, 00:09:50.911 { 00:09:50.911 "name": "pt2", 00:09:50.911 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:50.911 "is_configured": true, 00:09:50.911 "data_offset": 2048, 00:09:50.911 "data_size": 63488 00:09:50.911 }, 00:09:50.911 { 00:09:50.911 "name": "pt3", 00:09:50.911 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:50.911 "is_configured": true, 00:09:50.911 "data_offset": 2048, 00:09:50.911 "data_size": 63488 00:09:50.911 } 00:09:50.911 ] 00:09:50.911 }' 00:09:50.911 04:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.911 04:07:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.512 04:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:51.512 04:07:51 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.512 04:07:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.512 [2024-11-21 04:07:51.236509] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:51.512 [2024-11-21 04:07:51.236602] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:51.512 [2024-11-21 04:07:51.236724] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:51.512 [2024-11-21 04:07:51.236857] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:51.512 [2024-11-21 04:07:51.236907] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:09:51.512 04:07:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.512 04:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:09:51.512 04:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.512 04:07:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.512 04:07:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.512 04:07:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.512 04:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:09:51.512 04:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:09:51.512 04:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:09:51.512 04:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:09:51.512 04:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:09:51.512 04:07:51 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.512 04:07:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.512 04:07:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.512 04:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:51.512 04:07:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.512 04:07:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.512 [2024-11-21 04:07:51.292389] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:51.512 [2024-11-21 04:07:51.292462] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:51.512 [2024-11-21 04:07:51.292481] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:51.512 [2024-11-21 04:07:51.292493] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:51.512 [2024-11-21 04:07:51.295076] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:51.512 [2024-11-21 04:07:51.295116] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:51.512 [2024-11-21 04:07:51.295199] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:51.512 [2024-11-21 04:07:51.295266] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:51.512 [2024-11-21 04:07:51.295399] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:09:51.512 [2024-11-21 04:07:51.295414] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:51.512 [2024-11-21 04:07:51.295430] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000002000 name raid_bdev1, state configuring 00:09:51.512 [2024-11-21 04:07:51.295471] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:51.512 pt1 00:09:51.512 04:07:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.512 04:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:09:51.512 04:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:51.512 04:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:51.512 04:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:51.512 04:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:51.512 04:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:51.512 04:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:51.512 04:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:51.512 04:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:51.512 04:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:51.512 04:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:51.512 04:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.512 04:07:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.512 04:07:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.512 04:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:51.512 04:07:51 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.512 04:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:51.512 "name": "raid_bdev1", 00:09:51.512 "uuid": "4cabd615-225f-496d-89e5-ba0e8ccd37e4", 00:09:51.512 "strip_size_kb": 0, 00:09:51.512 "state": "configuring", 00:09:51.512 "raid_level": "raid1", 00:09:51.512 "superblock": true, 00:09:51.512 "num_base_bdevs": 3, 00:09:51.512 "num_base_bdevs_discovered": 1, 00:09:51.512 "num_base_bdevs_operational": 2, 00:09:51.512 "base_bdevs_list": [ 00:09:51.512 { 00:09:51.512 "name": null, 00:09:51.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.512 "is_configured": false, 00:09:51.512 "data_offset": 2048, 00:09:51.512 "data_size": 63488 00:09:51.512 }, 00:09:51.512 { 00:09:51.512 "name": "pt2", 00:09:51.512 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:51.512 "is_configured": true, 00:09:51.512 "data_offset": 2048, 00:09:51.512 "data_size": 63488 00:09:51.512 }, 00:09:51.512 { 00:09:51.512 "name": null, 00:09:51.512 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:51.512 "is_configured": false, 00:09:51.512 "data_offset": 2048, 00:09:51.512 "data_size": 63488 00:09:51.512 } 00:09:51.512 ] 00:09:51.512 }' 00:09:51.512 04:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.512 04:07:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.772 04:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:09:51.772 04:07:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.772 04:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:51.772 04:07:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.031 04:07:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:52.031 04:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:09:52.031 04:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:52.031 04:07:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.031 04:07:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.031 [2024-11-21 04:07:51.811494] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:52.031 [2024-11-21 04:07:51.811621] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:52.031 [2024-11-21 04:07:51.811659] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:09:52.031 [2024-11-21 04:07:51.811689] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:52.031 [2024-11-21 04:07:51.812274] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:52.031 [2024-11-21 04:07:51.812344] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:52.031 [2024-11-21 04:07:51.812479] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:52.031 [2024-11-21 04:07:51.812540] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:52.031 [2024-11-21 04:07:51.812703] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:09:52.031 [2024-11-21 04:07:51.812747] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:52.031 [2024-11-21 04:07:51.813037] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:09:52.032 [2024-11-21 04:07:51.813247] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:09:52.032 [2024-11-21 04:07:51.813293] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:09:52.032 [2024-11-21 04:07:51.813506] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:52.032 pt3 00:09:52.032 04:07:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.032 04:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:52.032 04:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:52.032 04:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:52.032 04:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:52.032 04:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:52.032 04:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:52.032 04:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.032 04:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.032 04:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.032 04:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.032 04:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:52.032 04:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.032 04:07:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.032 04:07:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.032 04:07:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:09:52.032 04:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.032 "name": "raid_bdev1", 00:09:52.032 "uuid": "4cabd615-225f-496d-89e5-ba0e8ccd37e4", 00:09:52.032 "strip_size_kb": 0, 00:09:52.032 "state": "online", 00:09:52.032 "raid_level": "raid1", 00:09:52.032 "superblock": true, 00:09:52.032 "num_base_bdevs": 3, 00:09:52.032 "num_base_bdevs_discovered": 2, 00:09:52.032 "num_base_bdevs_operational": 2, 00:09:52.032 "base_bdevs_list": [ 00:09:52.032 { 00:09:52.032 "name": null, 00:09:52.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:52.032 "is_configured": false, 00:09:52.032 "data_offset": 2048, 00:09:52.032 "data_size": 63488 00:09:52.032 }, 00:09:52.032 { 00:09:52.032 "name": "pt2", 00:09:52.032 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:52.032 "is_configured": true, 00:09:52.032 "data_offset": 2048, 00:09:52.032 "data_size": 63488 00:09:52.032 }, 00:09:52.032 { 00:09:52.032 "name": "pt3", 00:09:52.032 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:52.032 "is_configured": true, 00:09:52.032 "data_offset": 2048, 00:09:52.032 "data_size": 63488 00:09:52.032 } 00:09:52.032 ] 00:09:52.032 }' 00:09:52.032 04:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.032 04:07:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.291 04:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:09:52.291 04:07:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.291 04:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:52.291 04:07:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.552 04:07:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.552 04:07:52 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:09:52.552 04:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:52.552 04:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:09:52.552 04:07:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.552 04:07:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.552 [2024-11-21 04:07:52.306910] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:52.552 04:07:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.552 04:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 4cabd615-225f-496d-89e5-ba0e8ccd37e4 '!=' 4cabd615-225f-496d-89e5-ba0e8ccd37e4 ']' 00:09:52.552 04:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 79684 00:09:52.552 04:07:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 79684 ']' 00:09:52.552 04:07:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 79684 00:09:52.552 04:07:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:52.552 04:07:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:52.552 04:07:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79684 00:09:52.552 killing process with pid 79684 00:09:52.552 04:07:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:52.552 04:07:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:52.552 04:07:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79684' 00:09:52.552 04:07:52 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@973 -- # kill 79684 00:09:52.552 [2024-11-21 04:07:52.383061] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:52.552 [2024-11-21 04:07:52.383165] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:52.552 04:07:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 79684 00:09:52.552 [2024-11-21 04:07:52.383254] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:52.552 [2024-11-21 04:07:52.383265] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:09:52.552 [2024-11-21 04:07:52.446232] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:52.812 04:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:52.812 00:09:52.812 real 0m6.480s 00:09:52.812 user 0m10.631s 00:09:52.812 sys 0m1.439s 00:09:52.812 04:07:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:52.812 ************************************ 00:09:52.812 END TEST raid_superblock_test 00:09:52.812 ************************************ 00:09:52.812 04:07:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.072 04:07:52 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:09:53.072 04:07:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:53.072 04:07:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:53.072 04:07:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:53.072 ************************************ 00:09:53.072 START TEST raid_read_error_test 00:09:53.072 ************************************ 00:09:53.072 04:07:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 read 00:09:53.072 04:07:52 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:53.072 04:07:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:53.072 04:07:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:53.072 04:07:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:53.072 04:07:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:53.072 04:07:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:53.072 04:07:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:53.072 04:07:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:53.072 04:07:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:53.072 04:07:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:53.072 04:07:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:53.072 04:07:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:53.072 04:07:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:53.072 04:07:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:53.072 04:07:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:53.072 04:07:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:53.072 04:07:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:53.072 04:07:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:53.072 04:07:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:53.072 04:07:52 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:53.072 04:07:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:53.072 04:07:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:53.072 04:07:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:53.072 04:07:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:53.072 04:07:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.j1xxuxri4y 00:09:53.072 04:07:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=80119 00:09:53.072 04:07:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 80119 00:09:53.072 04:07:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 80119 ']' 00:09:53.072 04:07:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:53.072 04:07:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:53.072 04:07:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:53.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:53.072 04:07:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:53.072 04:07:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.072 04:07:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:53.072 [2024-11-21 04:07:52.943381] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:09:53.072 [2024-11-21 04:07:52.943512] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80119 ] 00:09:53.332 [2024-11-21 04:07:53.100611] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.332 [2024-11-21 04:07:53.142050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.332 [2024-11-21 04:07:53.217992] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:53.332 [2024-11-21 04:07:53.218041] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:53.902 04:07:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:53.902 04:07:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:53.902 04:07:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:53.902 04:07:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:53.902 04:07:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.902 04:07:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.902 BaseBdev1_malloc 00:09:53.902 04:07:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.902 04:07:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:53.902 04:07:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.902 04:07:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.902 true 00:09:53.902 04:07:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:53.902 04:07:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:53.902 04:07:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.902 04:07:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.902 [2024-11-21 04:07:53.823639] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:53.902 [2024-11-21 04:07:53.823742] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:53.902 [2024-11-21 04:07:53.823775] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:09:53.902 [2024-11-21 04:07:53.823784] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:53.902 [2024-11-21 04:07:53.826352] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:53.902 [2024-11-21 04:07:53.826391] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:53.902 BaseBdev1 00:09:53.902 04:07:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.902 04:07:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:53.902 04:07:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:53.902 04:07:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.902 04:07:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.902 BaseBdev2_malloc 00:09:53.902 04:07:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.902 04:07:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:53.902 04:07:53 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.902 04:07:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.902 true 00:09:53.902 04:07:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.902 04:07:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:53.902 04:07:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.902 04:07:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.902 [2024-11-21 04:07:53.870398] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:53.902 [2024-11-21 04:07:53.870450] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:53.902 [2024-11-21 04:07:53.870471] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:09:53.902 [2024-11-21 04:07:53.870490] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:53.902 [2024-11-21 04:07:53.872961] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:53.902 [2024-11-21 04:07:53.873000] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:54.163 BaseBdev2 00:09:54.163 04:07:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.163 04:07:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:54.163 04:07:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:54.163 04:07:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.163 04:07:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.163 BaseBdev3_malloc 00:09:54.163 04:07:53 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.163 04:07:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:54.163 04:07:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.163 04:07:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.163 true 00:09:54.163 04:07:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.163 04:07:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:54.163 04:07:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.163 04:07:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.163 [2024-11-21 04:07:53.916894] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:54.163 [2024-11-21 04:07:53.916985] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:54.163 [2024-11-21 04:07:53.917010] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:09:54.163 [2024-11-21 04:07:53.917019] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:54.163 [2024-11-21 04:07:53.919487] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:54.163 [2024-11-21 04:07:53.919519] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:54.163 BaseBdev3 00:09:54.163 04:07:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.163 04:07:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:54.163 04:07:53 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.163 04:07:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.163 [2024-11-21 04:07:53.928958] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:54.163 [2024-11-21 04:07:53.931110] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:54.163 [2024-11-21 04:07:53.931247] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:54.163 [2024-11-21 04:07:53.931455] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:09:54.163 [2024-11-21 04:07:53.931472] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:54.163 [2024-11-21 04:07:53.931727] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002bb0 00:09:54.163 [2024-11-21 04:07:53.931869] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:09:54.163 [2024-11-21 04:07:53.931878] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:09:54.163 [2024-11-21 04:07:53.932007] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:54.163 04:07:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.163 04:07:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:54.163 04:07:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:54.163 04:07:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:54.163 04:07:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:54.163 04:07:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:54.163 04:07:53 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:54.163 04:07:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.163 04:07:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.163 04:07:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.163 04:07:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.163 04:07:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.163 04:07:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:54.163 04:07:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.163 04:07:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.163 04:07:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.163 04:07:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.163 "name": "raid_bdev1", 00:09:54.163 "uuid": "92b9e08b-0e7a-4da5-b658-496b7a6dd586", 00:09:54.163 "strip_size_kb": 0, 00:09:54.163 "state": "online", 00:09:54.163 "raid_level": "raid1", 00:09:54.163 "superblock": true, 00:09:54.163 "num_base_bdevs": 3, 00:09:54.163 "num_base_bdevs_discovered": 3, 00:09:54.163 "num_base_bdevs_operational": 3, 00:09:54.163 "base_bdevs_list": [ 00:09:54.163 { 00:09:54.163 "name": "BaseBdev1", 00:09:54.163 "uuid": "4dce6010-dc8a-54b0-84dd-9f180acbbe33", 00:09:54.163 "is_configured": true, 00:09:54.163 "data_offset": 2048, 00:09:54.163 "data_size": 63488 00:09:54.163 }, 00:09:54.163 { 00:09:54.163 "name": "BaseBdev2", 00:09:54.163 "uuid": "af6f8269-1a90-5267-9629-ab33d33f0e1d", 00:09:54.163 "is_configured": true, 00:09:54.163 "data_offset": 2048, 00:09:54.163 "data_size": 63488 
00:09:54.163 }, 00:09:54.163 { 00:09:54.163 "name": "BaseBdev3", 00:09:54.163 "uuid": "f38d326f-fd90-59c5-92ef-bb8ca066b555", 00:09:54.163 "is_configured": true, 00:09:54.163 "data_offset": 2048, 00:09:54.163 "data_size": 63488 00:09:54.163 } 00:09:54.163 ] 00:09:54.163 }' 00:09:54.163 04:07:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.163 04:07:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.422 04:07:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:54.422 04:07:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:54.681 [2024-11-21 04:07:54.484596] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002d50 00:09:55.619 04:07:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:55.619 04:07:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.619 04:07:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.619 04:07:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.619 04:07:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:55.619 04:07:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:55.619 04:07:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:09:55.619 04:07:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:55.619 04:07:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:55.619 04:07:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:55.619 
04:07:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:55.619 04:07:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:55.619 04:07:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:55.619 04:07:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:55.619 04:07:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.619 04:07:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.619 04:07:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.619 04:07:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.619 04:07:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.619 04:07:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.619 04:07:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.619 04:07:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:55.619 04:07:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.619 04:07:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.619 "name": "raid_bdev1", 00:09:55.619 "uuid": "92b9e08b-0e7a-4da5-b658-496b7a6dd586", 00:09:55.619 "strip_size_kb": 0, 00:09:55.619 "state": "online", 00:09:55.619 "raid_level": "raid1", 00:09:55.619 "superblock": true, 00:09:55.619 "num_base_bdevs": 3, 00:09:55.619 "num_base_bdevs_discovered": 3, 00:09:55.619 "num_base_bdevs_operational": 3, 00:09:55.619 "base_bdevs_list": [ 00:09:55.619 { 00:09:55.619 "name": "BaseBdev1", 00:09:55.619 "uuid": "4dce6010-dc8a-54b0-84dd-9f180acbbe33", 
00:09:55.619 "is_configured": true, 00:09:55.619 "data_offset": 2048, 00:09:55.619 "data_size": 63488 00:09:55.619 }, 00:09:55.619 { 00:09:55.619 "name": "BaseBdev2", 00:09:55.619 "uuid": "af6f8269-1a90-5267-9629-ab33d33f0e1d", 00:09:55.619 "is_configured": true, 00:09:55.619 "data_offset": 2048, 00:09:55.619 "data_size": 63488 00:09:55.619 }, 00:09:55.619 { 00:09:55.619 "name": "BaseBdev3", 00:09:55.619 "uuid": "f38d326f-fd90-59c5-92ef-bb8ca066b555", 00:09:55.619 "is_configured": true, 00:09:55.619 "data_offset": 2048, 00:09:55.619 "data_size": 63488 00:09:55.619 } 00:09:55.619 ] 00:09:55.619 }' 00:09:55.619 04:07:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.619 04:07:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.189 04:07:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:56.189 04:07:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.189 04:07:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.189 [2024-11-21 04:07:55.865661] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:56.189 [2024-11-21 04:07:55.865701] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:56.189 [2024-11-21 04:07:55.868366] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:56.189 [2024-11-21 04:07:55.868478] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:56.189 [2024-11-21 04:07:55.868612] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:56.189 [2024-11-21 04:07:55.868627] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:09:56.189 { 00:09:56.189 "results": [ 00:09:56.189 { 00:09:56.189 "job": "raid_bdev1", 
00:09:56.189 "core_mask": "0x1", 00:09:56.189 "workload": "randrw", 00:09:56.189 "percentage": 50, 00:09:56.189 "status": "finished", 00:09:56.189 "queue_depth": 1, 00:09:56.189 "io_size": 131072, 00:09:56.189 "runtime": 1.381549, 00:09:56.189 "iops": 10925.41777381765, 00:09:56.189 "mibps": 1365.6772217272062, 00:09:56.189 "io_failed": 0, 00:09:56.189 "io_timeout": 0, 00:09:56.189 "avg_latency_us": 89.00290198887554, 00:09:56.189 "min_latency_us": 22.022707423580787, 00:09:56.189 "max_latency_us": 1480.9991266375546 00:09:56.189 } 00:09:56.189 ], 00:09:56.189 "core_count": 1 00:09:56.189 } 00:09:56.189 04:07:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.189 04:07:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 80119 00:09:56.189 04:07:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 80119 ']' 00:09:56.189 04:07:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 80119 00:09:56.189 04:07:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:56.189 04:07:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:56.189 04:07:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80119 00:09:56.189 04:07:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:56.189 04:07:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:56.189 04:07:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80119' 00:09:56.189 killing process with pid 80119 00:09:56.189 04:07:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 80119 00:09:56.189 [2024-11-21 04:07:55.917988] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:56.189 04:07:55 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 80119 00:09:56.189 [2024-11-21 04:07:55.968121] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:56.466 04:07:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:56.466 04:07:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.j1xxuxri4y 00:09:56.466 04:07:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:56.466 04:07:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:56.466 ************************************ 00:09:56.466 END TEST raid_read_error_test 00:09:56.466 ************************************ 00:09:56.466 04:07:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:56.466 04:07:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:56.466 04:07:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:56.466 04:07:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:56.466 00:09:56.466 real 0m3.454s 00:09:56.466 user 0m4.292s 00:09:56.466 sys 0m0.633s 00:09:56.466 04:07:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:56.466 04:07:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.466 04:07:56 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:09:56.466 04:07:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:56.466 04:07:56 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:56.466 04:07:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:56.466 ************************************ 00:09:56.466 START TEST raid_write_error_test 00:09:56.466 ************************************ 00:09:56.466 04:07:56 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 00:09:56.466 04:07:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:56.466 04:07:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:56.466 04:07:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:56.466 04:07:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:56.466 04:07:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:56.466 04:07:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:56.466 04:07:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:56.466 04:07:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:56.466 04:07:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:56.466 04:07:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:56.466 04:07:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:56.466 04:07:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:56.466 04:07:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:56.466 04:07:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:56.466 04:07:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:56.466 04:07:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:56.466 04:07:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:56.466 04:07:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:09:56.466 04:07:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:56.466 04:07:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:56.466 04:07:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:56.466 04:07:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:56.466 04:07:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:56.467 04:07:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:56.467 04:07:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.J7QY8ARiEJ 00:09:56.467 04:07:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=80248 00:09:56.467 04:07:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:56.467 04:07:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 80248 00:09:56.467 04:07:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 80248 ']' 00:09:56.467 04:07:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:56.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:56.467 04:07:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:56.467 04:07:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:56.467 04:07:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:56.467 04:07:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.726 [2024-11-21 04:07:56.479019] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:09:56.726 [2024-11-21 04:07:56.479168] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80248 ] 00:09:56.726 [2024-11-21 04:07:56.636130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:56.726 [2024-11-21 04:07:56.674686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.985 [2024-11-21 04:07:56.750372] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:56.985 [2024-11-21 04:07:56.750416] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:57.552 04:07:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:57.552 04:07:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:57.552 04:07:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:57.552 04:07:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:57.552 04:07:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.552 04:07:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.552 BaseBdev1_malloc 00:09:57.552 04:07:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.553 04:07:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:57.553 04:07:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.553 04:07:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.553 true 00:09:57.553 04:07:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.553 04:07:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:57.553 04:07:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.553 04:07:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.553 [2024-11-21 04:07:57.345085] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:57.553 [2024-11-21 04:07:57.345152] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:57.553 [2024-11-21 04:07:57.345173] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:09:57.553 [2024-11-21 04:07:57.345189] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:57.553 [2024-11-21 04:07:57.347672] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:57.553 [2024-11-21 04:07:57.347762] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:57.553 BaseBdev1 00:09:57.553 04:07:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.553 04:07:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:57.553 04:07:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:57.553 04:07:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.553 04:07:57 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:57.553 BaseBdev2_malloc 00:09:57.553 04:07:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.553 04:07:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:57.553 04:07:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.553 04:07:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.553 true 00:09:57.553 04:07:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.553 04:07:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:57.553 04:07:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.553 04:07:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.553 [2024-11-21 04:07:57.391735] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:57.553 [2024-11-21 04:07:57.391783] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:57.553 [2024-11-21 04:07:57.391801] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:09:57.553 [2024-11-21 04:07:57.391819] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:57.553 [2024-11-21 04:07:57.394219] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:57.553 [2024-11-21 04:07:57.394265] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:57.553 BaseBdev2 00:09:57.553 04:07:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.553 04:07:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:57.553 04:07:57 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:57.553 04:07:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.553 04:07:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.553 BaseBdev3_malloc 00:09:57.553 04:07:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.553 04:07:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:57.553 04:07:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.553 04:07:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.553 true 00:09:57.553 04:07:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.553 04:07:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:57.553 04:07:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.553 04:07:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.553 [2024-11-21 04:07:57.438518] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:57.553 [2024-11-21 04:07:57.438566] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:57.553 [2024-11-21 04:07:57.438587] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:09:57.553 [2024-11-21 04:07:57.438596] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:57.553 [2024-11-21 04:07:57.441015] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:57.553 [2024-11-21 04:07:57.441123] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:57.553 BaseBdev3 00:09:57.553 04:07:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.553 04:07:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:57.553 04:07:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.553 04:07:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.553 [2024-11-21 04:07:57.450579] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:57.553 [2024-11-21 04:07:57.452738] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:57.553 [2024-11-21 04:07:57.452816] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:57.553 [2024-11-21 04:07:57.453001] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:09:57.553 [2024-11-21 04:07:57.453018] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:57.553 [2024-11-21 04:07:57.453291] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002bb0 00:09:57.553 [2024-11-21 04:07:57.453430] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:09:57.553 [2024-11-21 04:07:57.453446] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:09:57.553 [2024-11-21 04:07:57.453605] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:57.553 04:07:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.553 04:07:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:57.553 04:07:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:09:57.553 04:07:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:57.553 04:07:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:57.553 04:07:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:57.553 04:07:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:57.553 04:07:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.553 04:07:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.553 04:07:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.553 04:07:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.553 04:07:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.553 04:07:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:57.553 04:07:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.553 04:07:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.553 04:07:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.553 04:07:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.553 "name": "raid_bdev1", 00:09:57.553 "uuid": "d92b828f-ddd5-4306-86cb-176652962459", 00:09:57.553 "strip_size_kb": 0, 00:09:57.553 "state": "online", 00:09:57.553 "raid_level": "raid1", 00:09:57.553 "superblock": true, 00:09:57.553 "num_base_bdevs": 3, 00:09:57.553 "num_base_bdevs_discovered": 3, 00:09:57.553 "num_base_bdevs_operational": 3, 00:09:57.553 "base_bdevs_list": [ 00:09:57.553 { 00:09:57.553 "name": "BaseBdev1", 00:09:57.553 
"uuid": "88bb8236-d16b-57bb-a79c-ad8f9df0a73c", 00:09:57.553 "is_configured": true, 00:09:57.553 "data_offset": 2048, 00:09:57.553 "data_size": 63488 00:09:57.553 }, 00:09:57.553 { 00:09:57.553 "name": "BaseBdev2", 00:09:57.553 "uuid": "6eb71a84-e519-555d-bad8-b86f38ba8498", 00:09:57.553 "is_configured": true, 00:09:57.553 "data_offset": 2048, 00:09:57.553 "data_size": 63488 00:09:57.553 }, 00:09:57.553 { 00:09:57.553 "name": "BaseBdev3", 00:09:57.553 "uuid": "91555cd4-7177-502a-98a6-b39339bcf703", 00:09:57.553 "is_configured": true, 00:09:57.553 "data_offset": 2048, 00:09:57.553 "data_size": 63488 00:09:57.553 } 00:09:57.553 ] 00:09:57.553 }' 00:09:57.553 04:07:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.553 04:07:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.122 04:07:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:58.122 04:07:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:58.122 [2024-11-21 04:07:57.922282] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002d50 00:09:59.058 04:07:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:59.058 04:07:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.058 04:07:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.058 [2024-11-21 04:07:58.838673] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:09:59.058 [2024-11-21 04:07:58.838839] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:59.058 [2024-11-21 04:07:58.839144] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002d50 
00:09:59.058 04:07:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.058 04:07:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:59.058 04:07:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:59.058 04:07:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:09:59.058 04:07:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:09:59.058 04:07:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:59.058 04:07:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:59.058 04:07:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:59.058 04:07:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:59.058 04:07:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:59.058 04:07:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:59.059 04:07:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.059 04:07:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.059 04:07:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.059 04:07:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.059 04:07:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.059 04:07:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.059 04:07:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.059 
04:07:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:59.059 04:07:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.059 04:07:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.059 "name": "raid_bdev1", 00:09:59.059 "uuid": "d92b828f-ddd5-4306-86cb-176652962459", 00:09:59.059 "strip_size_kb": 0, 00:09:59.059 "state": "online", 00:09:59.059 "raid_level": "raid1", 00:09:59.059 "superblock": true, 00:09:59.059 "num_base_bdevs": 3, 00:09:59.059 "num_base_bdevs_discovered": 2, 00:09:59.059 "num_base_bdevs_operational": 2, 00:09:59.059 "base_bdevs_list": [ 00:09:59.059 { 00:09:59.059 "name": null, 00:09:59.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.059 "is_configured": false, 00:09:59.059 "data_offset": 0, 00:09:59.059 "data_size": 63488 00:09:59.059 }, 00:09:59.059 { 00:09:59.059 "name": "BaseBdev2", 00:09:59.059 "uuid": "6eb71a84-e519-555d-bad8-b86f38ba8498", 00:09:59.059 "is_configured": true, 00:09:59.059 "data_offset": 2048, 00:09:59.059 "data_size": 63488 00:09:59.059 }, 00:09:59.059 { 00:09:59.059 "name": "BaseBdev3", 00:09:59.059 "uuid": "91555cd4-7177-502a-98a6-b39339bcf703", 00:09:59.059 "is_configured": true, 00:09:59.059 "data_offset": 2048, 00:09:59.059 "data_size": 63488 00:09:59.059 } 00:09:59.059 ] 00:09:59.059 }' 00:09:59.059 04:07:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.059 04:07:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.628 04:07:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:59.628 04:07:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.628 04:07:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.628 [2024-11-21 04:07:59.306239] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:59.628 [2024-11-21 04:07:59.306278] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:59.628 [2024-11-21 04:07:59.308938] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:59.628 [2024-11-21 04:07:59.309069] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:59.628 [2024-11-21 04:07:59.309187] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:59.628 [2024-11-21 04:07:59.309200] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:09:59.628 { 00:09:59.628 "results": [ 00:09:59.628 { 00:09:59.628 "job": "raid_bdev1", 00:09:59.628 "core_mask": "0x1", 00:09:59.628 "workload": "randrw", 00:09:59.628 "percentage": 50, 00:09:59.628 "status": "finished", 00:09:59.628 "queue_depth": 1, 00:09:59.628 "io_size": 131072, 00:09:59.628 "runtime": 1.384462, 00:09:59.628 "iops": 12019.831530226182, 00:09:59.628 "mibps": 1502.4789412782727, 00:09:59.628 "io_failed": 0, 00:09:59.628 "io_timeout": 0, 00:09:59.628 "avg_latency_us": 80.59220744050643, 00:09:59.628 "min_latency_us": 22.134497816593885, 00:09:59.628 "max_latency_us": 1488.1537117903931 00:09:59.628 } 00:09:59.628 ], 00:09:59.628 "core_count": 1 00:09:59.628 } 00:09:59.628 04:07:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.628 04:07:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 80248 00:09:59.628 04:07:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 80248 ']' 00:09:59.628 04:07:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 80248 00:09:59.628 04:07:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:59.628 04:07:59 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:59.628 04:07:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80248 00:09:59.628 killing process with pid 80248 00:09:59.628 04:07:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:59.628 04:07:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:59.628 04:07:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80248' 00:09:59.628 04:07:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 80248 00:09:59.628 [2024-11-21 04:07:59.357580] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:59.628 04:07:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 80248 00:09:59.628 [2024-11-21 04:07:59.408962] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:59.887 04:07:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.J7QY8ARiEJ 00:09:59.887 04:07:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:59.887 04:07:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:59.887 04:07:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:59.887 04:07:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:59.888 ************************************ 00:09:59.888 END TEST raid_write_error_test 00:09:59.888 ************************************ 00:09:59.888 04:07:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:59.888 04:07:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:59.888 04:07:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 
]] 00:09:59.888 00:09:59.888 real 0m3.381s 00:09:59.888 user 0m4.093s 00:09:59.888 sys 0m0.658s 00:09:59.888 04:07:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:59.888 04:07:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.888 04:07:59 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:09:59.888 04:07:59 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:59.888 04:07:59 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:09:59.888 04:07:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:59.888 04:07:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:59.888 04:07:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:59.888 ************************************ 00:09:59.888 START TEST raid_state_function_test 00:09:59.888 ************************************ 00:09:59.888 04:07:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false 00:09:59.888 04:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:59.888 04:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:09:59.888 04:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:59.888 04:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:59.888 04:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:59.888 04:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:59.888 04:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:59.888 04:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:59.888 
04:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:59.888 04:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:59.888 04:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:59.888 04:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:59.888 04:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:59.888 04:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:59.888 04:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:59.888 04:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:09:59.888 04:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:59.888 04:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:59.888 04:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:59.888 04:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:59.888 04:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:59.888 04:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:59.888 04:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:59.888 04:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:59.888 04:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:59.888 04:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:59.888 04:07:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:59.888 04:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:59.888 04:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:59.888 04:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80375 00:09:59.888 04:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:59.888 04:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80375' 00:09:59.888 Process raid pid: 80375 00:09:59.888 04:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80375 00:09:59.888 04:07:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 80375 ']' 00:09:59.888 04:07:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:59.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:59.888 04:07:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:59.888 04:07:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:59.888 04:07:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:59.888 04:07:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.147 [2024-11-21 04:07:59.931425] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:10:00.147 [2024-11-21 04:07:59.931657] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:00.147 [2024-11-21 04:08:00.092547] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:00.407 [2024-11-21 04:08:00.135746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.407 [2024-11-21 04:08:00.216521] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:00.407 [2024-11-21 04:08:00.216560] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:00.978 04:08:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:00.978 04:08:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:00.978 04:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:00.978 04:08:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.978 04:08:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.978 [2024-11-21 04:08:00.802430] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:00.978 [2024-11-21 04:08:00.802581] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:00.978 [2024-11-21 04:08:00.802610] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:00.978 [2024-11-21 04:08:00.802624] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:00.978 [2024-11-21 04:08:00.802631] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:00.978 [2024-11-21 04:08:00.802646] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:00.978 [2024-11-21 04:08:00.802652] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:00.978 [2024-11-21 04:08:00.802662] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:00.978 04:08:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.978 04:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:00.978 04:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:00.978 04:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:00.978 04:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:00.978 04:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:00.978 04:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:00.978 04:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.978 04:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.978 04:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.978 04:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.978 04:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.978 04:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.978 04:08:00 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.978 04:08:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.978 04:08:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.978 04:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.978 "name": "Existed_Raid", 00:10:00.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.978 "strip_size_kb": 64, 00:10:00.978 "state": "configuring", 00:10:00.978 "raid_level": "raid0", 00:10:00.978 "superblock": false, 00:10:00.978 "num_base_bdevs": 4, 00:10:00.978 "num_base_bdevs_discovered": 0, 00:10:00.978 "num_base_bdevs_operational": 4, 00:10:00.978 "base_bdevs_list": [ 00:10:00.978 { 00:10:00.978 "name": "BaseBdev1", 00:10:00.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.978 "is_configured": false, 00:10:00.978 "data_offset": 0, 00:10:00.978 "data_size": 0 00:10:00.978 }, 00:10:00.978 { 00:10:00.978 "name": "BaseBdev2", 00:10:00.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.978 "is_configured": false, 00:10:00.978 "data_offset": 0, 00:10:00.978 "data_size": 0 00:10:00.978 }, 00:10:00.978 { 00:10:00.978 "name": "BaseBdev3", 00:10:00.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.978 "is_configured": false, 00:10:00.978 "data_offset": 0, 00:10:00.978 "data_size": 0 00:10:00.978 }, 00:10:00.978 { 00:10:00.978 "name": "BaseBdev4", 00:10:00.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.978 "is_configured": false, 00:10:00.978 "data_offset": 0, 00:10:00.978 "data_size": 0 00:10:00.978 } 00:10:00.978 ] 00:10:00.978 }' 00:10:00.978 04:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.978 04:08:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.548 04:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:10:01.548 04:08:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.548 04:08:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.548 [2024-11-21 04:08:01.229581] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:01.548 [2024-11-21 04:08:01.229722] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:10:01.549 04:08:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.549 04:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:01.549 04:08:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.549 04:08:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.549 [2024-11-21 04:08:01.241552] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:01.549 [2024-11-21 04:08:01.241654] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:01.549 [2024-11-21 04:08:01.241684] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:01.549 [2024-11-21 04:08:01.241707] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:01.549 [2024-11-21 04:08:01.241725] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:01.549 [2024-11-21 04:08:01.241746] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:01.549 [2024-11-21 04:08:01.241763] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:01.549 [2024-11-21 04:08:01.241829] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:01.549 04:08:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.549 04:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:01.549 04:08:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.549 04:08:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.549 [2024-11-21 04:08:01.269551] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:01.549 BaseBdev1 00:10:01.549 04:08:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.549 04:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:01.549 04:08:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:01.549 04:08:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:01.549 04:08:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:01.549 04:08:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:01.549 04:08:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:01.549 04:08:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:01.549 04:08:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.549 04:08:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.549 04:08:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.549 04:08:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:01.549 04:08:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.549 04:08:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.549 [ 00:10:01.549 { 00:10:01.549 "name": "BaseBdev1", 00:10:01.549 "aliases": [ 00:10:01.549 "fdb969f7-0a6e-47ac-9d38-efd10bc4f2f5" 00:10:01.549 ], 00:10:01.549 "product_name": "Malloc disk", 00:10:01.549 "block_size": 512, 00:10:01.549 "num_blocks": 65536, 00:10:01.549 "uuid": "fdb969f7-0a6e-47ac-9d38-efd10bc4f2f5", 00:10:01.549 "assigned_rate_limits": { 00:10:01.549 "rw_ios_per_sec": 0, 00:10:01.549 "rw_mbytes_per_sec": 0, 00:10:01.549 "r_mbytes_per_sec": 0, 00:10:01.549 "w_mbytes_per_sec": 0 00:10:01.549 }, 00:10:01.549 "claimed": true, 00:10:01.549 "claim_type": "exclusive_write", 00:10:01.549 "zoned": false, 00:10:01.549 "supported_io_types": { 00:10:01.549 "read": true, 00:10:01.549 "write": true, 00:10:01.549 "unmap": true, 00:10:01.549 "flush": true, 00:10:01.549 "reset": true, 00:10:01.549 "nvme_admin": false, 00:10:01.549 "nvme_io": false, 00:10:01.549 "nvme_io_md": false, 00:10:01.549 "write_zeroes": true, 00:10:01.549 "zcopy": true, 00:10:01.549 "get_zone_info": false, 00:10:01.549 "zone_management": false, 00:10:01.549 "zone_append": false, 00:10:01.549 "compare": false, 00:10:01.549 "compare_and_write": false, 00:10:01.549 "abort": true, 00:10:01.549 "seek_hole": false, 00:10:01.549 "seek_data": false, 00:10:01.549 "copy": true, 00:10:01.549 "nvme_iov_md": false 00:10:01.549 }, 00:10:01.549 "memory_domains": [ 00:10:01.549 { 00:10:01.549 "dma_device_id": "system", 00:10:01.549 "dma_device_type": 1 00:10:01.549 }, 00:10:01.549 { 00:10:01.549 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.549 "dma_device_type": 2 00:10:01.549 } 00:10:01.549 ], 00:10:01.549 "driver_specific": {} 00:10:01.549 } 00:10:01.549 ] 00:10:01.549 04:08:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:10:01.549 04:08:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:01.549 04:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:01.549 04:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:01.549 04:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:01.549 04:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:01.549 04:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:01.549 04:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:01.549 04:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.549 04:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.549 04:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.549 04:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.549 04:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.549 04:08:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.549 04:08:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.549 04:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.549 04:08:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.549 04:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.549 "name": "Existed_Raid", 
00:10:01.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.549 "strip_size_kb": 64, 00:10:01.549 "state": "configuring", 00:10:01.549 "raid_level": "raid0", 00:10:01.549 "superblock": false, 00:10:01.549 "num_base_bdevs": 4, 00:10:01.549 "num_base_bdevs_discovered": 1, 00:10:01.549 "num_base_bdevs_operational": 4, 00:10:01.549 "base_bdevs_list": [ 00:10:01.549 { 00:10:01.549 "name": "BaseBdev1", 00:10:01.549 "uuid": "fdb969f7-0a6e-47ac-9d38-efd10bc4f2f5", 00:10:01.549 "is_configured": true, 00:10:01.549 "data_offset": 0, 00:10:01.549 "data_size": 65536 00:10:01.549 }, 00:10:01.549 { 00:10:01.549 "name": "BaseBdev2", 00:10:01.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.549 "is_configured": false, 00:10:01.549 "data_offset": 0, 00:10:01.549 "data_size": 0 00:10:01.549 }, 00:10:01.549 { 00:10:01.549 "name": "BaseBdev3", 00:10:01.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.549 "is_configured": false, 00:10:01.549 "data_offset": 0, 00:10:01.549 "data_size": 0 00:10:01.549 }, 00:10:01.549 { 00:10:01.549 "name": "BaseBdev4", 00:10:01.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.549 "is_configured": false, 00:10:01.549 "data_offset": 0, 00:10:01.549 "data_size": 0 00:10:01.549 } 00:10:01.549 ] 00:10:01.549 }' 00:10:01.550 04:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.550 04:08:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.810 04:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:01.810 04:08:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.810 04:08:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.070 [2024-11-21 04:08:01.784752] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:02.070 [2024-11-21 04:08:01.784921] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:10:02.070 04:08:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.070 04:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:02.070 04:08:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.070 04:08:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.070 [2024-11-21 04:08:01.796752] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:02.070 [2024-11-21 04:08:01.799067] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:02.070 [2024-11-21 04:08:01.799154] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:02.070 [2024-11-21 04:08:01.799169] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:02.070 [2024-11-21 04:08:01.799179] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:02.070 [2024-11-21 04:08:01.799185] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:02.070 [2024-11-21 04:08:01.799195] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:02.070 04:08:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.070 04:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:02.070 04:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:02.070 04:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:10:02.070 04:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.070 04:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.070 04:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:02.070 04:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:02.070 04:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:02.070 04:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.070 04:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.070 04:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.070 04:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.070 04:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.070 04:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.070 04:08:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.070 04:08:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.070 04:08:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.070 04:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.070 "name": "Existed_Raid", 00:10:02.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.070 "strip_size_kb": 64, 00:10:02.070 "state": "configuring", 00:10:02.070 "raid_level": "raid0", 00:10:02.070 "superblock": false, 00:10:02.070 "num_base_bdevs": 4, 00:10:02.070 
"num_base_bdevs_discovered": 1, 00:10:02.070 "num_base_bdevs_operational": 4, 00:10:02.070 "base_bdevs_list": [ 00:10:02.070 { 00:10:02.070 "name": "BaseBdev1", 00:10:02.070 "uuid": "fdb969f7-0a6e-47ac-9d38-efd10bc4f2f5", 00:10:02.070 "is_configured": true, 00:10:02.070 "data_offset": 0, 00:10:02.070 "data_size": 65536 00:10:02.070 }, 00:10:02.070 { 00:10:02.070 "name": "BaseBdev2", 00:10:02.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.070 "is_configured": false, 00:10:02.070 "data_offset": 0, 00:10:02.070 "data_size": 0 00:10:02.070 }, 00:10:02.070 { 00:10:02.070 "name": "BaseBdev3", 00:10:02.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.070 "is_configured": false, 00:10:02.070 "data_offset": 0, 00:10:02.070 "data_size": 0 00:10:02.070 }, 00:10:02.070 { 00:10:02.070 "name": "BaseBdev4", 00:10:02.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.070 "is_configured": false, 00:10:02.071 "data_offset": 0, 00:10:02.071 "data_size": 0 00:10:02.071 } 00:10:02.071 ] 00:10:02.071 }' 00:10:02.071 04:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.071 04:08:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.330 04:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:02.330 04:08:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.330 04:08:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.330 [2024-11-21 04:08:02.285532] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:02.330 BaseBdev2 00:10:02.330 04:08:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.330 04:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:02.330 04:08:02 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:02.330 04:08:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:02.330 04:08:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:02.330 04:08:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:02.330 04:08:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:02.330 04:08:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:02.330 04:08:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.330 04:08:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.591 04:08:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.591 04:08:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:02.591 04:08:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.591 04:08:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.591 [ 00:10:02.591 { 00:10:02.591 "name": "BaseBdev2", 00:10:02.591 "aliases": [ 00:10:02.591 "586e5c93-3344-42c9-a19d-5d70d448afb5" 00:10:02.591 ], 00:10:02.591 "product_name": "Malloc disk", 00:10:02.591 "block_size": 512, 00:10:02.591 "num_blocks": 65536, 00:10:02.591 "uuid": "586e5c93-3344-42c9-a19d-5d70d448afb5", 00:10:02.591 "assigned_rate_limits": { 00:10:02.591 "rw_ios_per_sec": 0, 00:10:02.591 "rw_mbytes_per_sec": 0, 00:10:02.591 "r_mbytes_per_sec": 0, 00:10:02.591 "w_mbytes_per_sec": 0 00:10:02.591 }, 00:10:02.591 "claimed": true, 00:10:02.591 "claim_type": "exclusive_write", 00:10:02.591 "zoned": false, 00:10:02.591 "supported_io_types": { 
00:10:02.591 "read": true, 00:10:02.591 "write": true, 00:10:02.591 "unmap": true, 00:10:02.591 "flush": true, 00:10:02.591 "reset": true, 00:10:02.591 "nvme_admin": false, 00:10:02.591 "nvme_io": false, 00:10:02.591 "nvme_io_md": false, 00:10:02.591 "write_zeroes": true, 00:10:02.591 "zcopy": true, 00:10:02.591 "get_zone_info": false, 00:10:02.591 "zone_management": false, 00:10:02.591 "zone_append": false, 00:10:02.591 "compare": false, 00:10:02.591 "compare_and_write": false, 00:10:02.591 "abort": true, 00:10:02.591 "seek_hole": false, 00:10:02.591 "seek_data": false, 00:10:02.591 "copy": true, 00:10:02.591 "nvme_iov_md": false 00:10:02.591 }, 00:10:02.591 "memory_domains": [ 00:10:02.591 { 00:10:02.591 "dma_device_id": "system", 00:10:02.591 "dma_device_type": 1 00:10:02.591 }, 00:10:02.591 { 00:10:02.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.591 "dma_device_type": 2 00:10:02.591 } 00:10:02.591 ], 00:10:02.591 "driver_specific": {} 00:10:02.591 } 00:10:02.591 ] 00:10:02.591 04:08:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.591 04:08:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:02.591 04:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:02.591 04:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:02.591 04:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:02.591 04:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.591 04:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.591 04:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:02.591 04:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:10:02.591 04:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:02.591 04:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.591 04:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.591 04:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.591 04:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.591 04:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.591 04:08:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.591 04:08:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.591 04:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.591 04:08:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.591 04:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.591 "name": "Existed_Raid", 00:10:02.591 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.591 "strip_size_kb": 64, 00:10:02.591 "state": "configuring", 00:10:02.591 "raid_level": "raid0", 00:10:02.591 "superblock": false, 00:10:02.591 "num_base_bdevs": 4, 00:10:02.591 "num_base_bdevs_discovered": 2, 00:10:02.591 "num_base_bdevs_operational": 4, 00:10:02.591 "base_bdevs_list": [ 00:10:02.591 { 00:10:02.591 "name": "BaseBdev1", 00:10:02.591 "uuid": "fdb969f7-0a6e-47ac-9d38-efd10bc4f2f5", 00:10:02.591 "is_configured": true, 00:10:02.591 "data_offset": 0, 00:10:02.591 "data_size": 65536 00:10:02.591 }, 00:10:02.591 { 00:10:02.591 "name": "BaseBdev2", 00:10:02.591 "uuid": "586e5c93-3344-42c9-a19d-5d70d448afb5", 00:10:02.591 
"is_configured": true, 00:10:02.591 "data_offset": 0, 00:10:02.591 "data_size": 65536 00:10:02.591 }, 00:10:02.591 { 00:10:02.591 "name": "BaseBdev3", 00:10:02.591 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.591 "is_configured": false, 00:10:02.591 "data_offset": 0, 00:10:02.591 "data_size": 0 00:10:02.591 }, 00:10:02.591 { 00:10:02.591 "name": "BaseBdev4", 00:10:02.591 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.591 "is_configured": false, 00:10:02.591 "data_offset": 0, 00:10:02.591 "data_size": 0 00:10:02.591 } 00:10:02.591 ] 00:10:02.591 }' 00:10:02.591 04:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.591 04:08:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.852 04:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:02.852 04:08:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.852 04:08:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.852 [2024-11-21 04:08:02.765514] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:02.852 BaseBdev3 00:10:02.852 04:08:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.852 04:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:02.852 04:08:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:02.852 04:08:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:02.852 04:08:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:02.852 04:08:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:02.852 04:08:02 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:02.852 04:08:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:02.852 04:08:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.852 04:08:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.852 04:08:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.852 04:08:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:02.852 04:08:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.852 04:08:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.852 [ 00:10:02.852 { 00:10:02.852 "name": "BaseBdev3", 00:10:02.852 "aliases": [ 00:10:02.852 "3095786a-24b5-4659-8218-07db003514e5" 00:10:02.852 ], 00:10:02.852 "product_name": "Malloc disk", 00:10:02.852 "block_size": 512, 00:10:02.852 "num_blocks": 65536, 00:10:02.852 "uuid": "3095786a-24b5-4659-8218-07db003514e5", 00:10:02.852 "assigned_rate_limits": { 00:10:02.852 "rw_ios_per_sec": 0, 00:10:02.852 "rw_mbytes_per_sec": 0, 00:10:02.852 "r_mbytes_per_sec": 0, 00:10:02.852 "w_mbytes_per_sec": 0 00:10:02.852 }, 00:10:02.852 "claimed": true, 00:10:02.852 "claim_type": "exclusive_write", 00:10:02.852 "zoned": false, 00:10:02.852 "supported_io_types": { 00:10:02.852 "read": true, 00:10:02.852 "write": true, 00:10:02.852 "unmap": true, 00:10:02.852 "flush": true, 00:10:02.852 "reset": true, 00:10:02.852 "nvme_admin": false, 00:10:02.852 "nvme_io": false, 00:10:02.852 "nvme_io_md": false, 00:10:02.852 "write_zeroes": true, 00:10:02.852 "zcopy": true, 00:10:02.852 "get_zone_info": false, 00:10:02.853 "zone_management": false, 00:10:02.853 "zone_append": false, 00:10:02.853 "compare": false, 00:10:02.853 "compare_and_write": false, 
00:10:02.853 "abort": true, 00:10:02.853 "seek_hole": false, 00:10:02.853 "seek_data": false, 00:10:02.853 "copy": true, 00:10:02.853 "nvme_iov_md": false 00:10:02.853 }, 00:10:02.853 "memory_domains": [ 00:10:02.853 { 00:10:02.853 "dma_device_id": "system", 00:10:02.853 "dma_device_type": 1 00:10:02.853 }, 00:10:02.853 { 00:10:02.853 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.853 "dma_device_type": 2 00:10:02.853 } 00:10:02.853 ], 00:10:02.853 "driver_specific": {} 00:10:02.853 } 00:10:02.853 ] 00:10:02.853 04:08:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.853 04:08:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:02.853 04:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:02.853 04:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:02.853 04:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:02.853 04:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.853 04:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.853 04:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:02.853 04:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:02.853 04:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:02.853 04:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.853 04:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.853 04:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:02.853 04:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.853 04:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.853 04:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.853 04:08:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.853 04:08:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.114 04:08:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.114 04:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.114 "name": "Existed_Raid", 00:10:03.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.114 "strip_size_kb": 64, 00:10:03.114 "state": "configuring", 00:10:03.114 "raid_level": "raid0", 00:10:03.114 "superblock": false, 00:10:03.114 "num_base_bdevs": 4, 00:10:03.114 "num_base_bdevs_discovered": 3, 00:10:03.114 "num_base_bdevs_operational": 4, 00:10:03.114 "base_bdevs_list": [ 00:10:03.114 { 00:10:03.114 "name": "BaseBdev1", 00:10:03.114 "uuid": "fdb969f7-0a6e-47ac-9d38-efd10bc4f2f5", 00:10:03.114 "is_configured": true, 00:10:03.114 "data_offset": 0, 00:10:03.114 "data_size": 65536 00:10:03.114 }, 00:10:03.114 { 00:10:03.114 "name": "BaseBdev2", 00:10:03.114 "uuid": "586e5c93-3344-42c9-a19d-5d70d448afb5", 00:10:03.114 "is_configured": true, 00:10:03.114 "data_offset": 0, 00:10:03.114 "data_size": 65536 00:10:03.114 }, 00:10:03.114 { 00:10:03.114 "name": "BaseBdev3", 00:10:03.114 "uuid": "3095786a-24b5-4659-8218-07db003514e5", 00:10:03.114 "is_configured": true, 00:10:03.114 "data_offset": 0, 00:10:03.114 "data_size": 65536 00:10:03.114 }, 00:10:03.114 { 00:10:03.114 "name": "BaseBdev4", 00:10:03.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.114 "is_configured": false, 
00:10:03.114 "data_offset": 0, 00:10:03.114 "data_size": 0 00:10:03.114 } 00:10:03.114 ] 00:10:03.114 }' 00:10:03.114 04:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.114 04:08:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.376 04:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:03.376 04:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.376 04:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.376 [2024-11-21 04:08:03.266374] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:03.376 [2024-11-21 04:08:03.266434] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:10:03.376 [2024-11-21 04:08:03.266446] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:03.376 [2024-11-21 04:08:03.266783] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:10:03.376 [2024-11-21 04:08:03.266929] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:10:03.376 [2024-11-21 04:08:03.266941] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:10:03.376 [2024-11-21 04:08:03.267184] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:03.376 BaseBdev4 00:10:03.376 04:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.376 04:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:03.376 04:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:03.376 04:08:03 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:03.376 04:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:03.376 04:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:03.376 04:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:03.376 04:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:03.376 04:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.376 04:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.376 04:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.376 04:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:03.376 04:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.376 04:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.376 [ 00:10:03.376 { 00:10:03.376 "name": "BaseBdev4", 00:10:03.376 "aliases": [ 00:10:03.376 "776924cb-949b-4176-94ff-1e0d73b2bc22" 00:10:03.376 ], 00:10:03.376 "product_name": "Malloc disk", 00:10:03.376 "block_size": 512, 00:10:03.376 "num_blocks": 65536, 00:10:03.376 "uuid": "776924cb-949b-4176-94ff-1e0d73b2bc22", 00:10:03.376 "assigned_rate_limits": { 00:10:03.376 "rw_ios_per_sec": 0, 00:10:03.376 "rw_mbytes_per_sec": 0, 00:10:03.376 "r_mbytes_per_sec": 0, 00:10:03.376 "w_mbytes_per_sec": 0 00:10:03.376 }, 00:10:03.376 "claimed": true, 00:10:03.376 "claim_type": "exclusive_write", 00:10:03.376 "zoned": false, 00:10:03.376 "supported_io_types": { 00:10:03.376 "read": true, 00:10:03.376 "write": true, 00:10:03.376 "unmap": true, 00:10:03.376 "flush": true, 00:10:03.376 "reset": true, 00:10:03.376 
"nvme_admin": false, 00:10:03.376 "nvme_io": false, 00:10:03.376 "nvme_io_md": false, 00:10:03.376 "write_zeroes": true, 00:10:03.376 "zcopy": true, 00:10:03.376 "get_zone_info": false, 00:10:03.376 "zone_management": false, 00:10:03.376 "zone_append": false, 00:10:03.376 "compare": false, 00:10:03.376 "compare_and_write": false, 00:10:03.376 "abort": true, 00:10:03.376 "seek_hole": false, 00:10:03.376 "seek_data": false, 00:10:03.376 "copy": true, 00:10:03.376 "nvme_iov_md": false 00:10:03.376 }, 00:10:03.376 "memory_domains": [ 00:10:03.376 { 00:10:03.376 "dma_device_id": "system", 00:10:03.376 "dma_device_type": 1 00:10:03.376 }, 00:10:03.376 { 00:10:03.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.376 "dma_device_type": 2 00:10:03.376 } 00:10:03.376 ], 00:10:03.376 "driver_specific": {} 00:10:03.376 } 00:10:03.376 ] 00:10:03.376 04:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.376 04:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:03.376 04:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:03.376 04:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:03.376 04:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:03.376 04:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.376 04:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:03.376 04:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:03.376 04:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:03.376 04:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:03.376 04:08:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.376 04:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.376 04:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.376 04:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.376 04:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.376 04:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.376 04:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.376 04:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.376 04:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.639 04:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.639 "name": "Existed_Raid", 00:10:03.639 "uuid": "4bbf0f59-139a-4284-8a05-c7e78e29fe27", 00:10:03.639 "strip_size_kb": 64, 00:10:03.639 "state": "online", 00:10:03.639 "raid_level": "raid0", 00:10:03.639 "superblock": false, 00:10:03.639 "num_base_bdevs": 4, 00:10:03.639 "num_base_bdevs_discovered": 4, 00:10:03.639 "num_base_bdevs_operational": 4, 00:10:03.639 "base_bdevs_list": [ 00:10:03.639 { 00:10:03.639 "name": "BaseBdev1", 00:10:03.639 "uuid": "fdb969f7-0a6e-47ac-9d38-efd10bc4f2f5", 00:10:03.639 "is_configured": true, 00:10:03.639 "data_offset": 0, 00:10:03.639 "data_size": 65536 00:10:03.639 }, 00:10:03.639 { 00:10:03.639 "name": "BaseBdev2", 00:10:03.639 "uuid": "586e5c93-3344-42c9-a19d-5d70d448afb5", 00:10:03.639 "is_configured": true, 00:10:03.639 "data_offset": 0, 00:10:03.639 "data_size": 65536 00:10:03.639 }, 00:10:03.639 { 00:10:03.639 "name": "BaseBdev3", 00:10:03.639 "uuid": 
"3095786a-24b5-4659-8218-07db003514e5", 00:10:03.639 "is_configured": true, 00:10:03.639 "data_offset": 0, 00:10:03.639 "data_size": 65536 00:10:03.639 }, 00:10:03.639 { 00:10:03.639 "name": "BaseBdev4", 00:10:03.639 "uuid": "776924cb-949b-4176-94ff-1e0d73b2bc22", 00:10:03.639 "is_configured": true, 00:10:03.639 "data_offset": 0, 00:10:03.639 "data_size": 65536 00:10:03.639 } 00:10:03.639 ] 00:10:03.639 }' 00:10:03.639 04:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.639 04:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.899 04:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:03.899 04:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:03.899 04:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:03.899 04:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:03.899 04:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:03.899 04:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:03.899 04:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:03.899 04:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:03.899 04:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.899 04:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.899 [2024-11-21 04:08:03.730070] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:03.899 04:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.899 04:08:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:03.899 "name": "Existed_Raid", 00:10:03.899 "aliases": [ 00:10:03.899 "4bbf0f59-139a-4284-8a05-c7e78e29fe27" 00:10:03.899 ], 00:10:03.899 "product_name": "Raid Volume", 00:10:03.899 "block_size": 512, 00:10:03.899 "num_blocks": 262144, 00:10:03.899 "uuid": "4bbf0f59-139a-4284-8a05-c7e78e29fe27", 00:10:03.899 "assigned_rate_limits": { 00:10:03.899 "rw_ios_per_sec": 0, 00:10:03.899 "rw_mbytes_per_sec": 0, 00:10:03.899 "r_mbytes_per_sec": 0, 00:10:03.899 "w_mbytes_per_sec": 0 00:10:03.899 }, 00:10:03.899 "claimed": false, 00:10:03.899 "zoned": false, 00:10:03.899 "supported_io_types": { 00:10:03.899 "read": true, 00:10:03.899 "write": true, 00:10:03.899 "unmap": true, 00:10:03.899 "flush": true, 00:10:03.899 "reset": true, 00:10:03.899 "nvme_admin": false, 00:10:03.899 "nvme_io": false, 00:10:03.899 "nvme_io_md": false, 00:10:03.899 "write_zeroes": true, 00:10:03.899 "zcopy": false, 00:10:03.899 "get_zone_info": false, 00:10:03.899 "zone_management": false, 00:10:03.899 "zone_append": false, 00:10:03.899 "compare": false, 00:10:03.899 "compare_and_write": false, 00:10:03.899 "abort": false, 00:10:03.899 "seek_hole": false, 00:10:03.899 "seek_data": false, 00:10:03.899 "copy": false, 00:10:03.899 "nvme_iov_md": false 00:10:03.899 }, 00:10:03.899 "memory_domains": [ 00:10:03.899 { 00:10:03.899 "dma_device_id": "system", 00:10:03.899 "dma_device_type": 1 00:10:03.899 }, 00:10:03.899 { 00:10:03.899 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.899 "dma_device_type": 2 00:10:03.899 }, 00:10:03.899 { 00:10:03.899 "dma_device_id": "system", 00:10:03.899 "dma_device_type": 1 00:10:03.899 }, 00:10:03.899 { 00:10:03.899 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.899 "dma_device_type": 2 00:10:03.899 }, 00:10:03.899 { 00:10:03.899 "dma_device_id": "system", 00:10:03.899 "dma_device_type": 1 00:10:03.899 }, 00:10:03.899 { 00:10:03.899 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:10:03.899 "dma_device_type": 2 00:10:03.899 }, 00:10:03.899 { 00:10:03.899 "dma_device_id": "system", 00:10:03.899 "dma_device_type": 1 00:10:03.899 }, 00:10:03.899 { 00:10:03.899 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.899 "dma_device_type": 2 00:10:03.899 } 00:10:03.899 ], 00:10:03.899 "driver_specific": { 00:10:03.899 "raid": { 00:10:03.899 "uuid": "4bbf0f59-139a-4284-8a05-c7e78e29fe27", 00:10:03.899 "strip_size_kb": 64, 00:10:03.899 "state": "online", 00:10:03.899 "raid_level": "raid0", 00:10:03.899 "superblock": false, 00:10:03.899 "num_base_bdevs": 4, 00:10:03.899 "num_base_bdevs_discovered": 4, 00:10:03.899 "num_base_bdevs_operational": 4, 00:10:03.899 "base_bdevs_list": [ 00:10:03.899 { 00:10:03.899 "name": "BaseBdev1", 00:10:03.899 "uuid": "fdb969f7-0a6e-47ac-9d38-efd10bc4f2f5", 00:10:03.899 "is_configured": true, 00:10:03.899 "data_offset": 0, 00:10:03.899 "data_size": 65536 00:10:03.899 }, 00:10:03.899 { 00:10:03.899 "name": "BaseBdev2", 00:10:03.899 "uuid": "586e5c93-3344-42c9-a19d-5d70d448afb5", 00:10:03.899 "is_configured": true, 00:10:03.899 "data_offset": 0, 00:10:03.899 "data_size": 65536 00:10:03.899 }, 00:10:03.899 { 00:10:03.899 "name": "BaseBdev3", 00:10:03.899 "uuid": "3095786a-24b5-4659-8218-07db003514e5", 00:10:03.899 "is_configured": true, 00:10:03.899 "data_offset": 0, 00:10:03.899 "data_size": 65536 00:10:03.899 }, 00:10:03.899 { 00:10:03.899 "name": "BaseBdev4", 00:10:03.900 "uuid": "776924cb-949b-4176-94ff-1e0d73b2bc22", 00:10:03.900 "is_configured": true, 00:10:03.900 "data_offset": 0, 00:10:03.900 "data_size": 65536 00:10:03.900 } 00:10:03.900 ] 00:10:03.900 } 00:10:03.900 } 00:10:03.900 }' 00:10:03.900 04:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:03.900 04:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:03.900 BaseBdev2 00:10:03.900 BaseBdev3 
00:10:03.900 BaseBdev4' 00:10:03.900 04:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:03.900 04:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:03.900 04:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:03.900 04:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:03.900 04:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.900 04:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:03.900 04:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.900 04:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.160 04:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:04.160 04:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:04.160 04:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:04.160 04:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:04.160 04:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.160 04:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.160 04:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:04.160 04:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.160 04:08:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:04.160 04:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:04.160 04:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:04.160 04:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:04.160 04:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.160 04:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.160 04:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:04.160 04:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.160 04:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:04.160 04:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:04.160 04:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:04.160 04:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:04.160 04:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:04.160 04:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.160 04:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.160 04:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.160 04:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:04.160 04:08:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:04.160 04:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:04.160 04:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.160 04:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.160 [2024-11-21 04:08:04.029265] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:04.160 [2024-11-21 04:08:04.029358] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:04.160 [2024-11-21 04:08:04.029437] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:04.160 04:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.160 04:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:04.160 04:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:04.160 04:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:04.160 04:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:04.160 04:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:04.160 04:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:04.160 04:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:04.160 04:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:04.160 04:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:04.160 04:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:04.160 04:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:04.160 04:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.160 04:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.160 04:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.160 04:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.160 04:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.160 04:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:04.160 04:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.160 04:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.160 04:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.160 04:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.160 "name": "Existed_Raid", 00:10:04.160 "uuid": "4bbf0f59-139a-4284-8a05-c7e78e29fe27", 00:10:04.160 "strip_size_kb": 64, 00:10:04.160 "state": "offline", 00:10:04.160 "raid_level": "raid0", 00:10:04.160 "superblock": false, 00:10:04.160 "num_base_bdevs": 4, 00:10:04.160 "num_base_bdevs_discovered": 3, 00:10:04.160 "num_base_bdevs_operational": 3, 00:10:04.160 "base_bdevs_list": [ 00:10:04.160 { 00:10:04.160 "name": null, 00:10:04.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.160 "is_configured": false, 00:10:04.160 "data_offset": 0, 00:10:04.160 "data_size": 65536 00:10:04.160 }, 00:10:04.160 { 00:10:04.160 "name": "BaseBdev2", 00:10:04.160 "uuid": "586e5c93-3344-42c9-a19d-5d70d448afb5", 00:10:04.160 "is_configured": 
true, 00:10:04.160 "data_offset": 0, 00:10:04.160 "data_size": 65536 00:10:04.160 }, 00:10:04.160 { 00:10:04.160 "name": "BaseBdev3", 00:10:04.160 "uuid": "3095786a-24b5-4659-8218-07db003514e5", 00:10:04.160 "is_configured": true, 00:10:04.160 "data_offset": 0, 00:10:04.160 "data_size": 65536 00:10:04.160 }, 00:10:04.160 { 00:10:04.160 "name": "BaseBdev4", 00:10:04.160 "uuid": "776924cb-949b-4176-94ff-1e0d73b2bc22", 00:10:04.160 "is_configured": true, 00:10:04.160 "data_offset": 0, 00:10:04.160 "data_size": 65536 00:10:04.160 } 00:10:04.160 ] 00:10:04.160 }' 00:10:04.160 04:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.160 04:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.729 04:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:04.729 04:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:04.729 04:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:04.729 04:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.729 04:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.729 04:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.729 04:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.729 04:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:04.729 04:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:04.729 04:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:04.729 04:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:04.729 04:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.729 [2024-11-21 04:08:04.522428] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:04.729 04:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.729 04:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:04.729 04:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:04.729 04:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.729 04:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.729 04:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.729 04:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:04.729 04:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.729 04:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:04.729 04:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:04.729 04:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:04.729 04:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.729 04:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.729 [2024-11-21 04:08:04.599882] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:04.729 04:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.729 04:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:04.729 04:08:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:04.729 04:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.729 04:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:04.729 04:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.729 04:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.729 04:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.729 04:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:04.729 04:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:04.729 04:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:04.729 04:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.729 04:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.729 [2024-11-21 04:08:04.677133] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:04.729 [2024-11-21 04:08:04.677293] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:10:04.729 04:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.990 04:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:04.990 04:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:04.990 04:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.990 04:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:04.990 04:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.990 04:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:04.990 04:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.990 04:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:04.990 04:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:04.990 04:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:04.990 04:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:04.990 04:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:04.990 04:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:04.990 04:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.990 04:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.990 BaseBdev2 00:10:04.990 04:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.990 04:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:04.990 04:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:04.990 04:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:04.990 04:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:04.990 04:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:04.990 04:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:10:04.990 04:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:04.990 04:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.990 04:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.990 04:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.990 04:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:04.990 04:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.990 04:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.990 [ 00:10:04.990 { 00:10:04.990 "name": "BaseBdev2", 00:10:04.990 "aliases": [ 00:10:04.990 "ac0c3b16-68b2-4b5c-ade6-abb47b841ac6" 00:10:04.990 ], 00:10:04.990 "product_name": "Malloc disk", 00:10:04.990 "block_size": 512, 00:10:04.990 "num_blocks": 65536, 00:10:04.990 "uuid": "ac0c3b16-68b2-4b5c-ade6-abb47b841ac6", 00:10:04.990 "assigned_rate_limits": { 00:10:04.990 "rw_ios_per_sec": 0, 00:10:04.990 "rw_mbytes_per_sec": 0, 00:10:04.990 "r_mbytes_per_sec": 0, 00:10:04.990 "w_mbytes_per_sec": 0 00:10:04.990 }, 00:10:04.990 "claimed": false, 00:10:04.990 "zoned": false, 00:10:04.990 "supported_io_types": { 00:10:04.990 "read": true, 00:10:04.990 "write": true, 00:10:04.990 "unmap": true, 00:10:04.990 "flush": true, 00:10:04.990 "reset": true, 00:10:04.990 "nvme_admin": false, 00:10:04.990 "nvme_io": false, 00:10:04.990 "nvme_io_md": false, 00:10:04.990 "write_zeroes": true, 00:10:04.990 "zcopy": true, 00:10:04.990 "get_zone_info": false, 00:10:04.990 "zone_management": false, 00:10:04.990 "zone_append": false, 00:10:04.990 "compare": false, 00:10:04.990 "compare_and_write": false, 00:10:04.990 "abort": true, 00:10:04.990 "seek_hole": false, 00:10:04.990 
"seek_data": false, 00:10:04.990 "copy": true, 00:10:04.990 "nvme_iov_md": false 00:10:04.990 }, 00:10:04.990 "memory_domains": [ 00:10:04.990 { 00:10:04.990 "dma_device_id": "system", 00:10:04.990 "dma_device_type": 1 00:10:04.990 }, 00:10:04.990 { 00:10:04.990 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.990 "dma_device_type": 2 00:10:04.990 } 00:10:04.990 ], 00:10:04.990 "driver_specific": {} 00:10:04.990 } 00:10:04.990 ] 00:10:04.990 04:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.990 04:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:04.990 04:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:04.990 04:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:04.990 04:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:04.990 04:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.990 04:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.990 BaseBdev3 00:10:04.990 04:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.990 04:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:04.990 04:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:04.991 04:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:04.991 04:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:04.991 04:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:04.991 04:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:10:04.991 04:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:04.991 04:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.991 04:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.991 04:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.991 04:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:04.991 04:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.991 04:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.991 [ 00:10:04.991 { 00:10:04.991 "name": "BaseBdev3", 00:10:04.991 "aliases": [ 00:10:04.991 "f86cdfea-11bb-43fe-8bb2-f0214e49997c" 00:10:04.991 ], 00:10:04.991 "product_name": "Malloc disk", 00:10:04.991 "block_size": 512, 00:10:04.991 "num_blocks": 65536, 00:10:04.991 "uuid": "f86cdfea-11bb-43fe-8bb2-f0214e49997c", 00:10:04.991 "assigned_rate_limits": { 00:10:04.991 "rw_ios_per_sec": 0, 00:10:04.991 "rw_mbytes_per_sec": 0, 00:10:04.991 "r_mbytes_per_sec": 0, 00:10:04.991 "w_mbytes_per_sec": 0 00:10:04.991 }, 00:10:04.991 "claimed": false, 00:10:04.991 "zoned": false, 00:10:04.991 "supported_io_types": { 00:10:04.991 "read": true, 00:10:04.991 "write": true, 00:10:04.991 "unmap": true, 00:10:04.991 "flush": true, 00:10:04.991 "reset": true, 00:10:04.991 "nvme_admin": false, 00:10:04.991 "nvme_io": false, 00:10:04.991 "nvme_io_md": false, 00:10:04.991 "write_zeroes": true, 00:10:04.991 "zcopy": true, 00:10:04.991 "get_zone_info": false, 00:10:04.991 "zone_management": false, 00:10:04.991 "zone_append": false, 00:10:04.991 "compare": false, 00:10:04.991 "compare_and_write": false, 00:10:04.991 "abort": true, 00:10:04.991 "seek_hole": false, 00:10:04.991 "seek_data": false, 
00:10:04.991 "copy": true, 00:10:04.991 "nvme_iov_md": false 00:10:04.991 }, 00:10:04.991 "memory_domains": [ 00:10:04.991 { 00:10:04.991 "dma_device_id": "system", 00:10:04.991 "dma_device_type": 1 00:10:04.991 }, 00:10:04.991 { 00:10:04.991 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.991 "dma_device_type": 2 00:10:04.991 } 00:10:04.991 ], 00:10:04.991 "driver_specific": {} 00:10:04.991 } 00:10:04.991 ] 00:10:04.991 04:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.991 04:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:04.991 04:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:04.991 04:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:04.991 04:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:04.991 04:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.991 04:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.991 BaseBdev4 00:10:04.991 04:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.991 04:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:04.991 04:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:04.991 04:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:04.991 04:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:04.991 04:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:04.991 04:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:04.991 
04:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:04.991 04:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.991 04:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.991 04:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.991 04:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:04.991 04:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.991 04:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.991 [ 00:10:04.991 { 00:10:04.991 "name": "BaseBdev4", 00:10:04.991 "aliases": [ 00:10:04.991 "f153352c-1a2f-4597-83bc-5ca4db1f82f3" 00:10:04.991 ], 00:10:04.991 "product_name": "Malloc disk", 00:10:04.991 "block_size": 512, 00:10:04.991 "num_blocks": 65536, 00:10:04.991 "uuid": "f153352c-1a2f-4597-83bc-5ca4db1f82f3", 00:10:04.991 "assigned_rate_limits": { 00:10:04.991 "rw_ios_per_sec": 0, 00:10:04.991 "rw_mbytes_per_sec": 0, 00:10:04.991 "r_mbytes_per_sec": 0, 00:10:04.991 "w_mbytes_per_sec": 0 00:10:04.991 }, 00:10:04.991 "claimed": false, 00:10:04.991 "zoned": false, 00:10:04.991 "supported_io_types": { 00:10:04.991 "read": true, 00:10:04.991 "write": true, 00:10:04.991 "unmap": true, 00:10:04.991 "flush": true, 00:10:04.991 "reset": true, 00:10:04.991 "nvme_admin": false, 00:10:04.991 "nvme_io": false, 00:10:04.991 "nvme_io_md": false, 00:10:04.991 "write_zeroes": true, 00:10:04.991 "zcopy": true, 00:10:04.991 "get_zone_info": false, 00:10:04.991 "zone_management": false, 00:10:04.991 "zone_append": false, 00:10:04.991 "compare": false, 00:10:04.991 "compare_and_write": false, 00:10:04.991 "abort": true, 00:10:04.991 "seek_hole": false, 00:10:04.991 "seek_data": false, 00:10:04.991 
"copy": true, 00:10:04.991 "nvme_iov_md": false 00:10:04.991 }, 00:10:04.991 "memory_domains": [ 00:10:04.991 { 00:10:04.991 "dma_device_id": "system", 00:10:04.991 "dma_device_type": 1 00:10:04.991 }, 00:10:04.991 { 00:10:04.991 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.991 "dma_device_type": 2 00:10:04.991 } 00:10:04.991 ], 00:10:04.991 "driver_specific": {} 00:10:04.991 } 00:10:04.991 ] 00:10:04.991 04:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.991 04:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:04.991 04:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:04.991 04:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:04.991 04:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:04.991 04:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.991 04:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.991 [2024-11-21 04:08:04.953795] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:04.991 [2024-11-21 04:08:04.953928] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:04.991 [2024-11-21 04:08:04.954003] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:04.991 [2024-11-21 04:08:04.956355] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:04.991 [2024-11-21 04:08:04.956456] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:04.991 04:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.991 04:08:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:04.991 04:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:04.991 04:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:04.991 04:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:04.991 04:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:04.991 04:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:05.251 04:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.251 04:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.251 04:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.251 04:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.251 04:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.251 04:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.251 04:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.251 04:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.251 04:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.251 04:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.251 "name": "Existed_Raid", 00:10:05.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.251 "strip_size_kb": 64, 00:10:05.251 "state": "configuring", 00:10:05.251 
"raid_level": "raid0", 00:10:05.251 "superblock": false, 00:10:05.251 "num_base_bdevs": 4, 00:10:05.251 "num_base_bdevs_discovered": 3, 00:10:05.251 "num_base_bdevs_operational": 4, 00:10:05.251 "base_bdevs_list": [ 00:10:05.251 { 00:10:05.251 "name": "BaseBdev1", 00:10:05.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.251 "is_configured": false, 00:10:05.251 "data_offset": 0, 00:10:05.251 "data_size": 0 00:10:05.251 }, 00:10:05.251 { 00:10:05.251 "name": "BaseBdev2", 00:10:05.251 "uuid": "ac0c3b16-68b2-4b5c-ade6-abb47b841ac6", 00:10:05.251 "is_configured": true, 00:10:05.251 "data_offset": 0, 00:10:05.251 "data_size": 65536 00:10:05.251 }, 00:10:05.251 { 00:10:05.251 "name": "BaseBdev3", 00:10:05.251 "uuid": "f86cdfea-11bb-43fe-8bb2-f0214e49997c", 00:10:05.251 "is_configured": true, 00:10:05.251 "data_offset": 0, 00:10:05.251 "data_size": 65536 00:10:05.251 }, 00:10:05.251 { 00:10:05.251 "name": "BaseBdev4", 00:10:05.251 "uuid": "f153352c-1a2f-4597-83bc-5ca4db1f82f3", 00:10:05.251 "is_configured": true, 00:10:05.251 "data_offset": 0, 00:10:05.251 "data_size": 65536 00:10:05.251 } 00:10:05.251 ] 00:10:05.251 }' 00:10:05.251 04:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.251 04:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.511 04:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:05.511 04:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.511 04:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.511 [2024-11-21 04:08:05.373132] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:05.511 04:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.511 04:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:05.511 04:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.511 04:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:05.511 04:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:05.511 04:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:05.511 04:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:05.511 04:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.511 04:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.511 04:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.511 04:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.511 04:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.511 04:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.511 04:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.511 04:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.511 04:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.511 04:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.511 "name": "Existed_Raid", 00:10:05.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.511 "strip_size_kb": 64, 00:10:05.511 "state": "configuring", 00:10:05.511 "raid_level": "raid0", 00:10:05.511 "superblock": false, 00:10:05.511 
"num_base_bdevs": 4, 00:10:05.511 "num_base_bdevs_discovered": 2, 00:10:05.511 "num_base_bdevs_operational": 4, 00:10:05.511 "base_bdevs_list": [ 00:10:05.511 { 00:10:05.511 "name": "BaseBdev1", 00:10:05.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.511 "is_configured": false, 00:10:05.511 "data_offset": 0, 00:10:05.511 "data_size": 0 00:10:05.511 }, 00:10:05.511 { 00:10:05.511 "name": null, 00:10:05.511 "uuid": "ac0c3b16-68b2-4b5c-ade6-abb47b841ac6", 00:10:05.511 "is_configured": false, 00:10:05.511 "data_offset": 0, 00:10:05.511 "data_size": 65536 00:10:05.511 }, 00:10:05.511 { 00:10:05.511 "name": "BaseBdev3", 00:10:05.511 "uuid": "f86cdfea-11bb-43fe-8bb2-f0214e49997c", 00:10:05.511 "is_configured": true, 00:10:05.511 "data_offset": 0, 00:10:05.511 "data_size": 65536 00:10:05.511 }, 00:10:05.511 { 00:10:05.511 "name": "BaseBdev4", 00:10:05.511 "uuid": "f153352c-1a2f-4597-83bc-5ca4db1f82f3", 00:10:05.511 "is_configured": true, 00:10:05.511 "data_offset": 0, 00:10:05.511 "data_size": 65536 00:10:05.511 } 00:10:05.511 ] 00:10:05.511 }' 00:10:05.511 04:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.511 04:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.081 04:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.081 04:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:06.081 04:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.081 04:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.081 04:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.081 04:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:06.081 04:08:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:06.081 04:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.081 04:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.081 [2024-11-21 04:08:05.789965] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:06.081 BaseBdev1 00:10:06.081 04:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.082 04:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:06.082 04:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:06.082 04:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:06.082 04:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:06.082 04:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:06.082 04:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:06.082 04:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:06.082 04:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.082 04:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.082 04:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.082 04:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:06.082 04:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.082 04:08:05 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:06.082 [ 00:10:06.082 { 00:10:06.082 "name": "BaseBdev1", 00:10:06.082 "aliases": [ 00:10:06.082 "25427426-60d4-4a8d-ba38-c4c59c61515b" 00:10:06.082 ], 00:10:06.082 "product_name": "Malloc disk", 00:10:06.082 "block_size": 512, 00:10:06.082 "num_blocks": 65536, 00:10:06.082 "uuid": "25427426-60d4-4a8d-ba38-c4c59c61515b", 00:10:06.082 "assigned_rate_limits": { 00:10:06.082 "rw_ios_per_sec": 0, 00:10:06.082 "rw_mbytes_per_sec": 0, 00:10:06.082 "r_mbytes_per_sec": 0, 00:10:06.082 "w_mbytes_per_sec": 0 00:10:06.082 }, 00:10:06.082 "claimed": true, 00:10:06.082 "claim_type": "exclusive_write", 00:10:06.082 "zoned": false, 00:10:06.082 "supported_io_types": { 00:10:06.082 "read": true, 00:10:06.082 "write": true, 00:10:06.082 "unmap": true, 00:10:06.082 "flush": true, 00:10:06.082 "reset": true, 00:10:06.082 "nvme_admin": false, 00:10:06.082 "nvme_io": false, 00:10:06.082 "nvme_io_md": false, 00:10:06.082 "write_zeroes": true, 00:10:06.082 "zcopy": true, 00:10:06.082 "get_zone_info": false, 00:10:06.082 "zone_management": false, 00:10:06.082 "zone_append": false, 00:10:06.082 "compare": false, 00:10:06.082 "compare_and_write": false, 00:10:06.082 "abort": true, 00:10:06.082 "seek_hole": false, 00:10:06.082 "seek_data": false, 00:10:06.082 "copy": true, 00:10:06.082 "nvme_iov_md": false 00:10:06.082 }, 00:10:06.082 "memory_domains": [ 00:10:06.082 { 00:10:06.082 "dma_device_id": "system", 00:10:06.082 "dma_device_type": 1 00:10:06.082 }, 00:10:06.082 { 00:10:06.082 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.082 "dma_device_type": 2 00:10:06.082 } 00:10:06.082 ], 00:10:06.082 "driver_specific": {} 00:10:06.082 } 00:10:06.082 ] 00:10:06.082 04:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.082 04:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:06.082 04:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:06.082 04:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.082 04:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:06.082 04:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:06.082 04:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:06.082 04:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:06.082 04:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.082 04:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.082 04:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.082 04:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.082 04:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.082 04:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.082 04:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.082 04:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.082 04:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.082 04:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.082 "name": "Existed_Raid", 00:10:06.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.082 "strip_size_kb": 64, 00:10:06.082 "state": "configuring", 00:10:06.082 "raid_level": "raid0", 00:10:06.082 "superblock": false, 
00:10:06.082 "num_base_bdevs": 4, 00:10:06.082 "num_base_bdevs_discovered": 3, 00:10:06.082 "num_base_bdevs_operational": 4, 00:10:06.082 "base_bdevs_list": [ 00:10:06.082 { 00:10:06.082 "name": "BaseBdev1", 00:10:06.082 "uuid": "25427426-60d4-4a8d-ba38-c4c59c61515b", 00:10:06.082 "is_configured": true, 00:10:06.082 "data_offset": 0, 00:10:06.082 "data_size": 65536 00:10:06.082 }, 00:10:06.082 { 00:10:06.082 "name": null, 00:10:06.082 "uuid": "ac0c3b16-68b2-4b5c-ade6-abb47b841ac6", 00:10:06.082 "is_configured": false, 00:10:06.082 "data_offset": 0, 00:10:06.082 "data_size": 65536 00:10:06.082 }, 00:10:06.082 { 00:10:06.082 "name": "BaseBdev3", 00:10:06.082 "uuid": "f86cdfea-11bb-43fe-8bb2-f0214e49997c", 00:10:06.082 "is_configured": true, 00:10:06.082 "data_offset": 0, 00:10:06.082 "data_size": 65536 00:10:06.082 }, 00:10:06.082 { 00:10:06.082 "name": "BaseBdev4", 00:10:06.082 "uuid": "f153352c-1a2f-4597-83bc-5ca4db1f82f3", 00:10:06.082 "is_configured": true, 00:10:06.082 "data_offset": 0, 00:10:06.082 "data_size": 65536 00:10:06.082 } 00:10:06.082 ] 00:10:06.082 }' 00:10:06.082 04:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.082 04:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.342 04:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.342 04:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:06.342 04:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.342 04:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.342 04:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.342 04:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:06.342 04:08:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:06.342 04:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.342 04:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.342 [2024-11-21 04:08:06.285246] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:06.342 04:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.342 04:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:06.342 04:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.342 04:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:06.342 04:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:06.342 04:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:06.342 04:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:06.342 04:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.342 04:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.342 04:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.342 04:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.342 04:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.342 04:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.342 04:08:06 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.342 04:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.342 04:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.603 04:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.603 "name": "Existed_Raid", 00:10:06.603 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.603 "strip_size_kb": 64, 00:10:06.603 "state": "configuring", 00:10:06.603 "raid_level": "raid0", 00:10:06.603 "superblock": false, 00:10:06.603 "num_base_bdevs": 4, 00:10:06.603 "num_base_bdevs_discovered": 2, 00:10:06.603 "num_base_bdevs_operational": 4, 00:10:06.603 "base_bdevs_list": [ 00:10:06.603 { 00:10:06.603 "name": "BaseBdev1", 00:10:06.603 "uuid": "25427426-60d4-4a8d-ba38-c4c59c61515b", 00:10:06.603 "is_configured": true, 00:10:06.603 "data_offset": 0, 00:10:06.603 "data_size": 65536 00:10:06.603 }, 00:10:06.603 { 00:10:06.603 "name": null, 00:10:06.603 "uuid": "ac0c3b16-68b2-4b5c-ade6-abb47b841ac6", 00:10:06.603 "is_configured": false, 00:10:06.603 "data_offset": 0, 00:10:06.603 "data_size": 65536 00:10:06.603 }, 00:10:06.603 { 00:10:06.603 "name": null, 00:10:06.603 "uuid": "f86cdfea-11bb-43fe-8bb2-f0214e49997c", 00:10:06.603 "is_configured": false, 00:10:06.603 "data_offset": 0, 00:10:06.603 "data_size": 65536 00:10:06.603 }, 00:10:06.603 { 00:10:06.603 "name": "BaseBdev4", 00:10:06.603 "uuid": "f153352c-1a2f-4597-83bc-5ca4db1f82f3", 00:10:06.603 "is_configured": true, 00:10:06.603 "data_offset": 0, 00:10:06.603 "data_size": 65536 00:10:06.603 } 00:10:06.603 ] 00:10:06.603 }' 00:10:06.603 04:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.603 04:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.863 04:08:06 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:06.863 04:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.863 04:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.863 04:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.863 04:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.863 04:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:06.863 04:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:06.864 04:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.864 04:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.864 [2024-11-21 04:08:06.696529] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:06.864 04:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.864 04:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:06.864 04:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.864 04:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:06.864 04:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:06.864 04:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:06.864 04:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:06.864 04:08:06 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.864 04:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.864 04:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.864 04:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.864 04:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.864 04:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.864 04:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.864 04:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.864 04:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.864 04:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.864 "name": "Existed_Raid", 00:10:06.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.864 "strip_size_kb": 64, 00:10:06.864 "state": "configuring", 00:10:06.864 "raid_level": "raid0", 00:10:06.864 "superblock": false, 00:10:06.864 "num_base_bdevs": 4, 00:10:06.864 "num_base_bdevs_discovered": 3, 00:10:06.864 "num_base_bdevs_operational": 4, 00:10:06.864 "base_bdevs_list": [ 00:10:06.864 { 00:10:06.864 "name": "BaseBdev1", 00:10:06.864 "uuid": "25427426-60d4-4a8d-ba38-c4c59c61515b", 00:10:06.864 "is_configured": true, 00:10:06.864 "data_offset": 0, 00:10:06.864 "data_size": 65536 00:10:06.864 }, 00:10:06.864 { 00:10:06.864 "name": null, 00:10:06.864 "uuid": "ac0c3b16-68b2-4b5c-ade6-abb47b841ac6", 00:10:06.864 "is_configured": false, 00:10:06.864 "data_offset": 0, 00:10:06.864 "data_size": 65536 00:10:06.864 }, 00:10:06.864 { 00:10:06.864 "name": "BaseBdev3", 00:10:06.864 "uuid": "f86cdfea-11bb-43fe-8bb2-f0214e49997c", 
00:10:06.864 "is_configured": true, 00:10:06.864 "data_offset": 0, 00:10:06.864 "data_size": 65536 00:10:06.864 }, 00:10:06.864 { 00:10:06.864 "name": "BaseBdev4", 00:10:06.864 "uuid": "f153352c-1a2f-4597-83bc-5ca4db1f82f3", 00:10:06.864 "is_configured": true, 00:10:06.864 "data_offset": 0, 00:10:06.864 "data_size": 65536 00:10:06.864 } 00:10:06.864 ] 00:10:06.864 }' 00:10:06.864 04:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.864 04:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.432 04:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.432 04:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.432 04:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.432 04:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:07.432 04:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.432 04:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:07.432 04:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:07.432 04:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.432 04:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.432 [2024-11-21 04:08:07.147869] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:07.432 04:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.432 04:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:07.432 04:08:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.432 04:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:07.432 04:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:07.432 04:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:07.432 04:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:07.433 04:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.433 04:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.433 04:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.433 04:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.433 04:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.433 04:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.433 04:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.433 04:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.433 04:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.433 04:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.433 "name": "Existed_Raid", 00:10:07.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.433 "strip_size_kb": 64, 00:10:07.433 "state": "configuring", 00:10:07.433 "raid_level": "raid0", 00:10:07.433 "superblock": false, 00:10:07.433 "num_base_bdevs": 4, 00:10:07.433 "num_base_bdevs_discovered": 2, 00:10:07.433 
"num_base_bdevs_operational": 4, 00:10:07.433 "base_bdevs_list": [ 00:10:07.433 { 00:10:07.433 "name": null, 00:10:07.433 "uuid": "25427426-60d4-4a8d-ba38-c4c59c61515b", 00:10:07.433 "is_configured": false, 00:10:07.433 "data_offset": 0, 00:10:07.433 "data_size": 65536 00:10:07.433 }, 00:10:07.433 { 00:10:07.433 "name": null, 00:10:07.433 "uuid": "ac0c3b16-68b2-4b5c-ade6-abb47b841ac6", 00:10:07.433 "is_configured": false, 00:10:07.433 "data_offset": 0, 00:10:07.433 "data_size": 65536 00:10:07.433 }, 00:10:07.433 { 00:10:07.433 "name": "BaseBdev3", 00:10:07.433 "uuid": "f86cdfea-11bb-43fe-8bb2-f0214e49997c", 00:10:07.433 "is_configured": true, 00:10:07.433 "data_offset": 0, 00:10:07.433 "data_size": 65536 00:10:07.433 }, 00:10:07.433 { 00:10:07.433 "name": "BaseBdev4", 00:10:07.433 "uuid": "f153352c-1a2f-4597-83bc-5ca4db1f82f3", 00:10:07.433 "is_configured": true, 00:10:07.433 "data_offset": 0, 00:10:07.433 "data_size": 65536 00:10:07.433 } 00:10:07.433 ] 00:10:07.433 }' 00:10:07.433 04:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.433 04:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.693 04:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:07.693 04:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.693 04:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.693 04:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.693 04:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.693 04:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:07.693 04:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev 
Existed_Raid BaseBdev2 00:10:07.693 04:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.693 04:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.693 [2024-11-21 04:08:07.659961] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:07.952 04:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.952 04:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:07.952 04:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.952 04:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:07.952 04:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:07.952 04:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:07.952 04:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:07.952 04:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.952 04:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.952 04:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.952 04:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.952 04:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.952 04:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.952 04:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.952 
04:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.952 04:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.952 04:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.952 "name": "Existed_Raid", 00:10:07.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.952 "strip_size_kb": 64, 00:10:07.952 "state": "configuring", 00:10:07.952 "raid_level": "raid0", 00:10:07.952 "superblock": false, 00:10:07.952 "num_base_bdevs": 4, 00:10:07.952 "num_base_bdevs_discovered": 3, 00:10:07.952 "num_base_bdevs_operational": 4, 00:10:07.952 "base_bdevs_list": [ 00:10:07.952 { 00:10:07.952 "name": null, 00:10:07.952 "uuid": "25427426-60d4-4a8d-ba38-c4c59c61515b", 00:10:07.952 "is_configured": false, 00:10:07.952 "data_offset": 0, 00:10:07.952 "data_size": 65536 00:10:07.952 }, 00:10:07.952 { 00:10:07.952 "name": "BaseBdev2", 00:10:07.952 "uuid": "ac0c3b16-68b2-4b5c-ade6-abb47b841ac6", 00:10:07.952 "is_configured": true, 00:10:07.952 "data_offset": 0, 00:10:07.952 "data_size": 65536 00:10:07.952 }, 00:10:07.952 { 00:10:07.952 "name": "BaseBdev3", 00:10:07.953 "uuid": "f86cdfea-11bb-43fe-8bb2-f0214e49997c", 00:10:07.953 "is_configured": true, 00:10:07.953 "data_offset": 0, 00:10:07.953 "data_size": 65536 00:10:07.953 }, 00:10:07.953 { 00:10:07.953 "name": "BaseBdev4", 00:10:07.953 "uuid": "f153352c-1a2f-4597-83bc-5ca4db1f82f3", 00:10:07.953 "is_configured": true, 00:10:07.953 "data_offset": 0, 00:10:07.953 "data_size": 65536 00:10:07.953 } 00:10:07.953 ] 00:10:07.953 }' 00:10:07.953 04:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.953 04:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.212 04:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:08.212 04:08:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.212 04:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.212 04:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.212 04:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.212 04:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:08.212 04:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.212 04:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.212 04:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.212 04:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:08.212 04:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.212 04:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 25427426-60d4-4a8d-ba38-c4c59c61515b 00:10:08.212 04:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.212 04:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.212 [2024-11-21 04:08:08.172771] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:08.212 [2024-11-21 04:08:08.172897] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:10:08.212 [2024-11-21 04:08:08.172923] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:08.212 [2024-11-21 04:08:08.173304] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:10:08.212 
[2024-11-21 04:08:08.173504] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:10:08.212 [2024-11-21 04:08:08.173521] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:10:08.212 [2024-11-21 04:08:08.173770] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:08.212 NewBaseBdev 00:10:08.212 04:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.212 04:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:08.213 04:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:08.213 04:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:08.213 04:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:08.213 04:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:08.213 04:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:08.213 04:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:08.213 04:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.213 04:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.473 04:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.473 04:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:08.473 04:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.473 04:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:10:08.473 [ 00:10:08.473 { 00:10:08.473 "name": "NewBaseBdev", 00:10:08.473 "aliases": [ 00:10:08.473 "25427426-60d4-4a8d-ba38-c4c59c61515b" 00:10:08.473 ], 00:10:08.473 "product_name": "Malloc disk", 00:10:08.473 "block_size": 512, 00:10:08.473 "num_blocks": 65536, 00:10:08.473 "uuid": "25427426-60d4-4a8d-ba38-c4c59c61515b", 00:10:08.473 "assigned_rate_limits": { 00:10:08.473 "rw_ios_per_sec": 0, 00:10:08.473 "rw_mbytes_per_sec": 0, 00:10:08.473 "r_mbytes_per_sec": 0, 00:10:08.473 "w_mbytes_per_sec": 0 00:10:08.473 }, 00:10:08.473 "claimed": true, 00:10:08.473 "claim_type": "exclusive_write", 00:10:08.473 "zoned": false, 00:10:08.473 "supported_io_types": { 00:10:08.473 "read": true, 00:10:08.473 "write": true, 00:10:08.473 "unmap": true, 00:10:08.473 "flush": true, 00:10:08.473 "reset": true, 00:10:08.473 "nvme_admin": false, 00:10:08.473 "nvme_io": false, 00:10:08.473 "nvme_io_md": false, 00:10:08.473 "write_zeroes": true, 00:10:08.473 "zcopy": true, 00:10:08.473 "get_zone_info": false, 00:10:08.473 "zone_management": false, 00:10:08.473 "zone_append": false, 00:10:08.473 "compare": false, 00:10:08.473 "compare_and_write": false, 00:10:08.473 "abort": true, 00:10:08.473 "seek_hole": false, 00:10:08.473 "seek_data": false, 00:10:08.473 "copy": true, 00:10:08.473 "nvme_iov_md": false 00:10:08.473 }, 00:10:08.473 "memory_domains": [ 00:10:08.473 { 00:10:08.473 "dma_device_id": "system", 00:10:08.473 "dma_device_type": 1 00:10:08.473 }, 00:10:08.473 { 00:10:08.473 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.473 "dma_device_type": 2 00:10:08.473 } 00:10:08.473 ], 00:10:08.473 "driver_specific": {} 00:10:08.473 } 00:10:08.473 ] 00:10:08.473 04:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.473 04:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:08.473 04:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid 
online raid0 64 4 00:10:08.473 04:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:08.473 04:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:08.473 04:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:08.473 04:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:08.473 04:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:08.473 04:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.473 04:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.473 04:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.473 04:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.473 04:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.473 04:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.473 04:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.473 04:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.473 04:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.473 04:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.473 "name": "Existed_Raid", 00:10:08.473 "uuid": "e0104224-ef5d-4866-a470-549abb301c56", 00:10:08.473 "strip_size_kb": 64, 00:10:08.473 "state": "online", 00:10:08.473 "raid_level": "raid0", 00:10:08.473 "superblock": false, 00:10:08.473 "num_base_bdevs": 4, 00:10:08.473 
"num_base_bdevs_discovered": 4, 00:10:08.473 "num_base_bdevs_operational": 4, 00:10:08.473 "base_bdevs_list": [ 00:10:08.473 { 00:10:08.473 "name": "NewBaseBdev", 00:10:08.473 "uuid": "25427426-60d4-4a8d-ba38-c4c59c61515b", 00:10:08.473 "is_configured": true, 00:10:08.473 "data_offset": 0, 00:10:08.473 "data_size": 65536 00:10:08.473 }, 00:10:08.473 { 00:10:08.473 "name": "BaseBdev2", 00:10:08.473 "uuid": "ac0c3b16-68b2-4b5c-ade6-abb47b841ac6", 00:10:08.473 "is_configured": true, 00:10:08.473 "data_offset": 0, 00:10:08.473 "data_size": 65536 00:10:08.473 }, 00:10:08.473 { 00:10:08.473 "name": "BaseBdev3", 00:10:08.473 "uuid": "f86cdfea-11bb-43fe-8bb2-f0214e49997c", 00:10:08.473 "is_configured": true, 00:10:08.473 "data_offset": 0, 00:10:08.473 "data_size": 65536 00:10:08.473 }, 00:10:08.473 { 00:10:08.473 "name": "BaseBdev4", 00:10:08.473 "uuid": "f153352c-1a2f-4597-83bc-5ca4db1f82f3", 00:10:08.473 "is_configured": true, 00:10:08.473 "data_offset": 0, 00:10:08.473 "data_size": 65536 00:10:08.473 } 00:10:08.473 ] 00:10:08.473 }' 00:10:08.473 04:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.473 04:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.733 04:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:08.733 04:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:08.733 04:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:08.733 04:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:08.733 04:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:08.733 04:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:08.733 04:08:08 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:08.733 04:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:08.733 04:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.734 04:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.734 [2024-11-21 04:08:08.604497] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:08.734 04:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.734 04:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:08.734 "name": "Existed_Raid", 00:10:08.734 "aliases": [ 00:10:08.734 "e0104224-ef5d-4866-a470-549abb301c56" 00:10:08.734 ], 00:10:08.734 "product_name": "Raid Volume", 00:10:08.734 "block_size": 512, 00:10:08.734 "num_blocks": 262144, 00:10:08.734 "uuid": "e0104224-ef5d-4866-a470-549abb301c56", 00:10:08.734 "assigned_rate_limits": { 00:10:08.734 "rw_ios_per_sec": 0, 00:10:08.734 "rw_mbytes_per_sec": 0, 00:10:08.734 "r_mbytes_per_sec": 0, 00:10:08.734 "w_mbytes_per_sec": 0 00:10:08.734 }, 00:10:08.734 "claimed": false, 00:10:08.734 "zoned": false, 00:10:08.734 "supported_io_types": { 00:10:08.734 "read": true, 00:10:08.734 "write": true, 00:10:08.734 "unmap": true, 00:10:08.734 "flush": true, 00:10:08.734 "reset": true, 00:10:08.734 "nvme_admin": false, 00:10:08.734 "nvme_io": false, 00:10:08.734 "nvme_io_md": false, 00:10:08.734 "write_zeroes": true, 00:10:08.734 "zcopy": false, 00:10:08.734 "get_zone_info": false, 00:10:08.734 "zone_management": false, 00:10:08.734 "zone_append": false, 00:10:08.734 "compare": false, 00:10:08.734 "compare_and_write": false, 00:10:08.734 "abort": false, 00:10:08.734 "seek_hole": false, 00:10:08.734 "seek_data": false, 00:10:08.734 "copy": false, 00:10:08.734 "nvme_iov_md": false 00:10:08.734 }, 00:10:08.734 "memory_domains": [ 
00:10:08.734 { 00:10:08.734 "dma_device_id": "system", 00:10:08.734 "dma_device_type": 1 00:10:08.734 }, 00:10:08.734 { 00:10:08.734 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.734 "dma_device_type": 2 00:10:08.734 }, 00:10:08.734 { 00:10:08.734 "dma_device_id": "system", 00:10:08.734 "dma_device_type": 1 00:10:08.734 }, 00:10:08.734 { 00:10:08.734 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.734 "dma_device_type": 2 00:10:08.734 }, 00:10:08.734 { 00:10:08.734 "dma_device_id": "system", 00:10:08.734 "dma_device_type": 1 00:10:08.734 }, 00:10:08.734 { 00:10:08.734 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.734 "dma_device_type": 2 00:10:08.734 }, 00:10:08.734 { 00:10:08.734 "dma_device_id": "system", 00:10:08.734 "dma_device_type": 1 00:10:08.734 }, 00:10:08.734 { 00:10:08.734 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.734 "dma_device_type": 2 00:10:08.734 } 00:10:08.734 ], 00:10:08.734 "driver_specific": { 00:10:08.734 "raid": { 00:10:08.734 "uuid": "e0104224-ef5d-4866-a470-549abb301c56", 00:10:08.734 "strip_size_kb": 64, 00:10:08.734 "state": "online", 00:10:08.734 "raid_level": "raid0", 00:10:08.734 "superblock": false, 00:10:08.734 "num_base_bdevs": 4, 00:10:08.734 "num_base_bdevs_discovered": 4, 00:10:08.734 "num_base_bdevs_operational": 4, 00:10:08.734 "base_bdevs_list": [ 00:10:08.734 { 00:10:08.734 "name": "NewBaseBdev", 00:10:08.734 "uuid": "25427426-60d4-4a8d-ba38-c4c59c61515b", 00:10:08.734 "is_configured": true, 00:10:08.734 "data_offset": 0, 00:10:08.734 "data_size": 65536 00:10:08.734 }, 00:10:08.734 { 00:10:08.734 "name": "BaseBdev2", 00:10:08.734 "uuid": "ac0c3b16-68b2-4b5c-ade6-abb47b841ac6", 00:10:08.734 "is_configured": true, 00:10:08.734 "data_offset": 0, 00:10:08.734 "data_size": 65536 00:10:08.734 }, 00:10:08.734 { 00:10:08.734 "name": "BaseBdev3", 00:10:08.734 "uuid": "f86cdfea-11bb-43fe-8bb2-f0214e49997c", 00:10:08.734 "is_configured": true, 00:10:08.734 "data_offset": 0, 00:10:08.734 "data_size": 65536 
00:10:08.734 }, 00:10:08.734 { 00:10:08.734 "name": "BaseBdev4", 00:10:08.734 "uuid": "f153352c-1a2f-4597-83bc-5ca4db1f82f3", 00:10:08.734 "is_configured": true, 00:10:08.734 "data_offset": 0, 00:10:08.734 "data_size": 65536 00:10:08.734 } 00:10:08.734 ] 00:10:08.734 } 00:10:08.734 } 00:10:08.734 }' 00:10:08.734 04:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:08.734 04:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:08.734 BaseBdev2 00:10:08.734 BaseBdev3 00:10:08.734 BaseBdev4' 00:10:08.734 04:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:08.995 04:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:08.995 04:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:08.995 04:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:08.995 04:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:08.995 04:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.995 04:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.995 04:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.995 04:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:08.995 04:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:08.995 04:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:08.995 
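The `bdev_raid.sh@188` step traced above pipes the raid bdev JSON through a jq filter to collect the names of the configured base bdevs. A minimal standalone sketch of that same filter, run against a stand-in document rather than real `rpc_cmd` output:

```shell
# Stand-in JSON (not real rpc_cmd bdev_get_bdevs output) exercising the
# @188 filter shape.
raid_bdev_info='{"driver_specific":{"raid":{"base_bdevs_list":[
  {"name":"NewBaseBdev","is_configured":true},
  {"name":"BaseBdev2","is_configured":true},
  {"name":"BaseBdev5","is_configured":false}]}}}'

# Same filter as bdev_raid.sh@188: keep only configured entries, emit names
# one per line.
base_bdev_names=$(echo "$raid_bdev_info" | \
  jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name')
echo "$base_bdev_names"
```

The unconfigured entry is dropped, so only `NewBaseBdev` and `BaseBdev2` survive the filter.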
04:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:08.995 04:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:08.995 04:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.995 04:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.995 04:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.995 04:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:08.995 04:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:08.995 04:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:08.995 04:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:08.995 04:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:08.995 04:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.995 04:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.995 04:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.995 04:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:08.995 04:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:08.995 04:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:08.995 04:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 
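Each pass of the `@191`–`@193` loop above reduces a base bdev to the string `block_size md_size md_interleave dif_type` and compares it against the raid bdev's reduction. Null fields join as empty strings, which is why the xtrace shows `512` followed by three trailing spaces (`\5\1\2\ \ \ `). A self-contained sketch of the join-and-compare, using a hypothetical bdev descriptor:

```shell
# Hypothetical bdev descriptor; md_size/md_interleave/dif_type are null,
# as they are for the plain malloc bdevs used in this test.
bdev='[{"block_size":512,"md_size":null,"md_interleave":null,"dif_type":null}]'

# Same reduction as bdev_raid.sh@189/@192: jq's join() renders nulls as
# empty strings, so the result is "512" plus three trailing spaces.
cmp_base_bdev=$(echo "$bdev" | \
  jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")')

cmp_raid_bdev='512   '
[ "$cmp_base_bdev" = "$cmp_raid_bdev" ] && echo match
```

The trailing spaces matter: a base bdev with metadata or DIF enabled would produce a different string and fail the comparison.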
00:10:08.995 04:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:08.995 04:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.995 04:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.995 04:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.995 04:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:08.995 04:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:08.995 04:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:08.995 04:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.995 04:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.995 [2024-11-21 04:08:08.883670] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:08.995 [2024-11-21 04:08:08.883712] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:08.995 [2024-11-21 04:08:08.883826] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:08.995 [2024-11-21 04:08:08.883908] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:08.995 [2024-11-21 04:08:08.883920] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:10:08.995 04:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.995 04:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80375 00:10:08.995 04:08:08 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # '[' -z 80375 ']' 00:10:08.995 04:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 80375 00:10:08.995 04:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:08.995 04:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:08.995 04:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80375 00:10:08.995 04:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:08.995 04:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:08.995 04:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80375' 00:10:08.995 killing process with pid 80375 00:10:08.995 04:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 80375 00:10:08.995 [2024-11-21 04:08:08.921807] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:08.995 04:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 80375 00:10:09.256 [2024-11-21 04:08:09.004954] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:09.514 04:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:09.514 00:10:09.514 real 0m9.505s 00:10:09.514 user 0m15.789s 00:10:09.514 sys 0m2.111s 00:10:09.514 04:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:09.514 ************************************ 00:10:09.514 END TEST raid_state_function_test 00:10:09.514 ************************************ 00:10:09.515 04:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.515 04:08:09 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test raid0 4 true 00:10:09.515 04:08:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:09.515 04:08:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:09.515 04:08:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:09.515 ************************************ 00:10:09.515 START TEST raid_state_function_test_sb 00:10:09.515 ************************************ 00:10:09.515 04:08:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 00:10:09.515 04:08:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:09.515 04:08:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:09.515 04:08:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:09.515 04:08:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:09.515 04:08:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:09.515 04:08:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:09.515 04:08:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:09.515 04:08:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:09.515 04:08:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:09.515 04:08:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:09.515 04:08:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:09.515 04:08:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:09.515 04:08:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:09.515 
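`raid_state_function_test_sb` re-runs the same test body with `superblock=true`, which the script translates into the `-s` flag for `bdev_raid_create` (and `-z 64` for any level other than raid1, per the `@215`–`@223` trace above). A simplified, hypothetical reconstruction of that argument assembly:

```shell
# Hypothetical helper condensing the @215-@223 logic: raid1 takes no strip
# size argument, and superblock=true adds -s to bdev_raid_create.
build_create_args() {
  local raid_level=$1 superblock=$2 args=""
  [ "$raid_level" != "raid1" ] && args="-z 64"
  [ "$superblock" = "true" ] && args="$args -s"
  echo "$args"
}

build_create_args raid0 true   # -> "-z 64 -s"
```

With these arguments the created raid reserves superblock space on each base bdev, which is why the sb variant below reports `data_offset: 2048` and `data_size: 63488` instead of the `0`/`65536` seen in the non-sb run.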
04:08:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:09.515 04:08:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:09.515 04:08:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:09.515 04:08:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:09.515 04:08:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:09.515 04:08:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:09.515 04:08:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:09.515 04:08:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:09.515 04:08:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:09.515 04:08:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:09.515 04:08:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:09.515 04:08:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:09.515 04:08:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:09.515 04:08:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:09.515 04:08:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:09.515 04:08:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:09.515 04:08:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=81030 00:10:09.515 04:08:09 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:09.515 04:08:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 81030' 00:10:09.515 Process raid pid: 81030 00:10:09.515 04:08:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 81030 00:10:09.515 04:08:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 81030 ']' 00:10:09.515 04:08:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:09.515 04:08:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:09.515 04:08:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:09.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:09.515 04:08:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:09.515 04:08:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.774 [2024-11-21 04:08:09.510658] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:10:09.774 [2024-11-21 04:08:09.510808] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:09.774 [2024-11-21 04:08:09.670392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:09.774 [2024-11-21 04:08:09.710516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:10.034 [2024-11-21 04:08:09.786914] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:10.034 [2024-11-21 04:08:09.787055] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:10.632 04:08:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:10.632 04:08:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:10.632 04:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:10.632 04:08:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.632 04:08:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.632 [2024-11-21 04:08:10.354559] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:10.632 [2024-11-21 04:08:10.354720] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:10.632 [2024-11-21 04:08:10.354751] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:10.632 [2024-11-21 04:08:10.354776] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:10.632 [2024-11-21 04:08:10.354794] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:10:10.632 [2024-11-21 04:08:10.354820] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:10.632 [2024-11-21 04:08:10.354838] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:10.632 [2024-11-21 04:08:10.354876] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:10.632 04:08:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.632 04:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:10.632 04:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.632 04:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:10.632 04:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:10.632 04:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:10.632 04:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:10.632 04:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.632 04:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.632 04:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.632 04:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.632 04:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.632 04:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.632 04:08:10 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.632 04:08:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.632 04:08:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.632 04:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.632 "name": "Existed_Raid", 00:10:10.632 "uuid": "252fc40f-a352-4f2a-8386-6e6cdaa39c9b", 00:10:10.632 "strip_size_kb": 64, 00:10:10.632 "state": "configuring", 00:10:10.632 "raid_level": "raid0", 00:10:10.632 "superblock": true, 00:10:10.632 "num_base_bdevs": 4, 00:10:10.632 "num_base_bdevs_discovered": 0, 00:10:10.632 "num_base_bdevs_operational": 4, 00:10:10.632 "base_bdevs_list": [ 00:10:10.632 { 00:10:10.632 "name": "BaseBdev1", 00:10:10.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.632 "is_configured": false, 00:10:10.632 "data_offset": 0, 00:10:10.632 "data_size": 0 00:10:10.632 }, 00:10:10.632 { 00:10:10.632 "name": "BaseBdev2", 00:10:10.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.632 "is_configured": false, 00:10:10.632 "data_offset": 0, 00:10:10.632 "data_size": 0 00:10:10.632 }, 00:10:10.632 { 00:10:10.632 "name": "BaseBdev3", 00:10:10.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.632 "is_configured": false, 00:10:10.632 "data_offset": 0, 00:10:10.632 "data_size": 0 00:10:10.632 }, 00:10:10.632 { 00:10:10.632 "name": "BaseBdev4", 00:10:10.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.632 "is_configured": false, 00:10:10.632 "data_offset": 0, 00:10:10.632 "data_size": 0 00:10:10.632 } 00:10:10.632 ] 00:10:10.632 }' 00:10:10.632 04:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.632 04:08:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.893 04:08:10 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:10.893 04:08:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.893 04:08:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.893 [2024-11-21 04:08:10.777675] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:10.893 [2024-11-21 04:08:10.777831] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:10:10.893 04:08:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.893 04:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:10.893 04:08:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.893 04:08:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.893 [2024-11-21 04:08:10.789650] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:10.893 [2024-11-21 04:08:10.789740] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:10.893 [2024-11-21 04:08:10.789768] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:10.893 [2024-11-21 04:08:10.789781] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:10.893 [2024-11-21 04:08:10.789788] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:10.893 [2024-11-21 04:08:10.789797] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:10.893 [2024-11-21 04:08:10.789803] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:10:10.893 [2024-11-21 04:08:10.789813] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:10.893 04:08:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.893 04:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:10.893 04:08:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.893 04:08:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.893 [2024-11-21 04:08:10.816778] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:10.893 BaseBdev1 00:10:10.893 04:08:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.893 04:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:10.893 04:08:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:10.893 04:08:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:10.893 04:08:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:10.893 04:08:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:10.893 04:08:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:10.893 04:08:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:10.893 04:08:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.893 04:08:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.893 04:08:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:10.893 04:08:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:10.893 04:08:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.893 04:08:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.893 [ 00:10:10.893 { 00:10:10.893 "name": "BaseBdev1", 00:10:10.893 "aliases": [ 00:10:10.893 "1af31676-71db-4aa8-9675-542e6cdba126" 00:10:10.893 ], 00:10:10.893 "product_name": "Malloc disk", 00:10:10.893 "block_size": 512, 00:10:10.893 "num_blocks": 65536, 00:10:10.893 "uuid": "1af31676-71db-4aa8-9675-542e6cdba126", 00:10:10.893 "assigned_rate_limits": { 00:10:10.893 "rw_ios_per_sec": 0, 00:10:10.893 "rw_mbytes_per_sec": 0, 00:10:10.893 "r_mbytes_per_sec": 0, 00:10:10.893 "w_mbytes_per_sec": 0 00:10:10.893 }, 00:10:10.893 "claimed": true, 00:10:10.893 "claim_type": "exclusive_write", 00:10:10.893 "zoned": false, 00:10:10.893 "supported_io_types": { 00:10:10.893 "read": true, 00:10:10.893 "write": true, 00:10:10.893 "unmap": true, 00:10:10.893 "flush": true, 00:10:10.893 "reset": true, 00:10:10.893 "nvme_admin": false, 00:10:10.893 "nvme_io": false, 00:10:10.893 "nvme_io_md": false, 00:10:10.893 "write_zeroes": true, 00:10:10.893 "zcopy": true, 00:10:10.893 "get_zone_info": false, 00:10:10.893 "zone_management": false, 00:10:10.893 "zone_append": false, 00:10:10.893 "compare": false, 00:10:10.893 "compare_and_write": false, 00:10:10.893 "abort": true, 00:10:10.893 "seek_hole": false, 00:10:10.894 "seek_data": false, 00:10:10.894 "copy": true, 00:10:10.894 "nvme_iov_md": false 00:10:10.894 }, 00:10:10.894 "memory_domains": [ 00:10:10.894 { 00:10:10.894 "dma_device_id": "system", 00:10:10.894 "dma_device_type": 1 00:10:10.894 }, 00:10:10.894 { 00:10:10.894 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.894 "dma_device_type": 2 00:10:10.894 } 00:10:10.894 ], 00:10:10.894 "driver_specific": {} 
00:10:10.894 } 00:10:10.894 ] 00:10:10.894 04:08:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.894 04:08:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:10.894 04:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:10.894 04:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.894 04:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:10.894 04:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:10.894 04:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:10.894 04:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:10.894 04:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.894 04:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.894 04:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.894 04:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.153 04:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.153 04:08:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.153 04:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:11.153 04:08:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.153 04:08:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.153 04:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.153 "name": "Existed_Raid", 00:10:11.153 "uuid": "b8f417da-b058-44d2-a051-22d3c6050efe", 00:10:11.153 "strip_size_kb": 64, 00:10:11.153 "state": "configuring", 00:10:11.153 "raid_level": "raid0", 00:10:11.153 "superblock": true, 00:10:11.153 "num_base_bdevs": 4, 00:10:11.154 "num_base_bdevs_discovered": 1, 00:10:11.154 "num_base_bdevs_operational": 4, 00:10:11.154 "base_bdevs_list": [ 00:10:11.154 { 00:10:11.154 "name": "BaseBdev1", 00:10:11.154 "uuid": "1af31676-71db-4aa8-9675-542e6cdba126", 00:10:11.154 "is_configured": true, 00:10:11.154 "data_offset": 2048, 00:10:11.154 "data_size": 63488 00:10:11.154 }, 00:10:11.154 { 00:10:11.154 "name": "BaseBdev2", 00:10:11.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.154 "is_configured": false, 00:10:11.154 "data_offset": 0, 00:10:11.154 "data_size": 0 00:10:11.154 }, 00:10:11.154 { 00:10:11.154 "name": "BaseBdev3", 00:10:11.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.154 "is_configured": false, 00:10:11.154 "data_offset": 0, 00:10:11.154 "data_size": 0 00:10:11.154 }, 00:10:11.154 { 00:10:11.154 "name": "BaseBdev4", 00:10:11.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.154 "is_configured": false, 00:10:11.154 "data_offset": 0, 00:10:11.154 "data_size": 0 00:10:11.154 } 00:10:11.154 ] 00:10:11.154 }' 00:10:11.154 04:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.154 04:08:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.414 04:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:11.414 04:08:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.414 04:08:11 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:11.414 [2024-11-21 04:08:11.272169] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:11.414 [2024-11-21 04:08:11.272273] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:10:11.414 04:08:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.414 04:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:11.414 04:08:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.414 04:08:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.414 [2024-11-21 04:08:11.284173] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:11.414 [2024-11-21 04:08:11.286491] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:11.414 [2024-11-21 04:08:11.286534] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:11.414 [2024-11-21 04:08:11.286544] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:11.414 [2024-11-21 04:08:11.286552] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:11.414 [2024-11-21 04:08:11.286558] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:11.414 [2024-11-21 04:08:11.286566] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:11.414 04:08:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.414 04:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:11.414 04:08:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:11.414 04:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:11.414 04:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:11.414 04:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:11.414 04:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:11.414 04:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:11.414 04:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:11.414 04:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.414 04:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.414 04:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.414 04:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.414 04:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.414 04:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:11.414 04:08:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.414 04:08:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.414 04:08:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.414 04:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.414 "name": 
"Existed_Raid", 00:10:11.414 "uuid": "40a49e93-e287-4aa4-a1e7-f5851c30cb41", 00:10:11.414 "strip_size_kb": 64, 00:10:11.414 "state": "configuring", 00:10:11.414 "raid_level": "raid0", 00:10:11.414 "superblock": true, 00:10:11.415 "num_base_bdevs": 4, 00:10:11.415 "num_base_bdevs_discovered": 1, 00:10:11.415 "num_base_bdevs_operational": 4, 00:10:11.415 "base_bdevs_list": [ 00:10:11.415 { 00:10:11.415 "name": "BaseBdev1", 00:10:11.415 "uuid": "1af31676-71db-4aa8-9675-542e6cdba126", 00:10:11.415 "is_configured": true, 00:10:11.415 "data_offset": 2048, 00:10:11.415 "data_size": 63488 00:10:11.415 }, 00:10:11.415 { 00:10:11.415 "name": "BaseBdev2", 00:10:11.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.415 "is_configured": false, 00:10:11.415 "data_offset": 0, 00:10:11.415 "data_size": 0 00:10:11.415 }, 00:10:11.415 { 00:10:11.415 "name": "BaseBdev3", 00:10:11.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.415 "is_configured": false, 00:10:11.415 "data_offset": 0, 00:10:11.415 "data_size": 0 00:10:11.415 }, 00:10:11.415 { 00:10:11.415 "name": "BaseBdev4", 00:10:11.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.415 "is_configured": false, 00:10:11.415 "data_offset": 0, 00:10:11.415 "data_size": 0 00:10:11.415 } 00:10:11.415 ] 00:10:11.415 }' 00:10:11.415 04:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.415 04:08:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.986 04:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:11.986 04:08:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.986 04:08:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.986 [2024-11-21 04:08:11.780886] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
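The `data_offset`/`data_size` pairs in the dumps above follow directly from the `bdev_malloc_create 32 512` call and the `-s` (superblock) flag on the raid create. A standalone sketch of that arithmetic (values copied from the log; this only reproduces the numbers, it does not call any SPDK RPC):

```python
# bdev_malloc_create 32 512 -> a 32 MiB bdev with 512-byte blocks.
num_blocks = 32 * 1024 * 1024 // 512   # 65536 blocks per malloc base bdev

# With a superblock (-s), the raid module reserves the first 2048 blocks
# of each base bdev, which is the data_offset shown in the dumps.
data_offset = 2048
data_size = num_blocks - data_offset   # 63488, matching "data_size": 63488

# raid0 over 4 such base bdevs exposes the concatenated data regions,
# matching the "blockcnt 253952" line when the array comes online.
raid_blockcnt = data_size * 4

print(num_blocks, data_size, raid_blockcnt)
```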
00:10:11.986 BaseBdev2 00:10:11.986 04:08:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.986 04:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:11.986 04:08:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:11.986 04:08:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:11.986 04:08:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:11.986 04:08:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:11.986 04:08:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:11.986 04:08:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:11.986 04:08:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.986 04:08:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.986 04:08:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.986 04:08:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:11.986 04:08:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.986 04:08:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.986 [ 00:10:11.986 { 00:10:11.986 "name": "BaseBdev2", 00:10:11.986 "aliases": [ 00:10:11.986 "1d0fcac2-5333-46da-920d-7f0e7c41c533" 00:10:11.986 ], 00:10:11.986 "product_name": "Malloc disk", 00:10:11.986 "block_size": 512, 00:10:11.986 "num_blocks": 65536, 00:10:11.986 "uuid": "1d0fcac2-5333-46da-920d-7f0e7c41c533", 00:10:11.986 
"assigned_rate_limits": { 00:10:11.986 "rw_ios_per_sec": 0, 00:10:11.986 "rw_mbytes_per_sec": 0, 00:10:11.986 "r_mbytes_per_sec": 0, 00:10:11.986 "w_mbytes_per_sec": 0 00:10:11.986 }, 00:10:11.986 "claimed": true, 00:10:11.986 "claim_type": "exclusive_write", 00:10:11.986 "zoned": false, 00:10:11.986 "supported_io_types": { 00:10:11.986 "read": true, 00:10:11.986 "write": true, 00:10:11.986 "unmap": true, 00:10:11.986 "flush": true, 00:10:11.986 "reset": true, 00:10:11.986 "nvme_admin": false, 00:10:11.986 "nvme_io": false, 00:10:11.986 "nvme_io_md": false, 00:10:11.986 "write_zeroes": true, 00:10:11.986 "zcopy": true, 00:10:11.986 "get_zone_info": false, 00:10:11.986 "zone_management": false, 00:10:11.986 "zone_append": false, 00:10:11.986 "compare": false, 00:10:11.986 "compare_and_write": false, 00:10:11.986 "abort": true, 00:10:11.986 "seek_hole": false, 00:10:11.986 "seek_data": false, 00:10:11.986 "copy": true, 00:10:11.986 "nvme_iov_md": false 00:10:11.986 }, 00:10:11.986 "memory_domains": [ 00:10:11.986 { 00:10:11.986 "dma_device_id": "system", 00:10:11.986 "dma_device_type": 1 00:10:11.986 }, 00:10:11.986 { 00:10:11.986 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.986 "dma_device_type": 2 00:10:11.986 } 00:10:11.986 ], 00:10:11.986 "driver_specific": {} 00:10:11.986 } 00:10:11.986 ] 00:10:11.986 04:08:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.986 04:08:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:11.986 04:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:11.986 04:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:11.986 04:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:11.986 04:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:10:11.986 04:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:11.986 04:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:11.986 04:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:11.986 04:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:11.986 04:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.986 04:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.986 04:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.986 04:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.986 04:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.986 04:08:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.986 04:08:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.986 04:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:11.986 04:08:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.986 04:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.986 "name": "Existed_Raid", 00:10:11.986 "uuid": "40a49e93-e287-4aa4-a1e7-f5851c30cb41", 00:10:11.986 "strip_size_kb": 64, 00:10:11.986 "state": "configuring", 00:10:11.986 "raid_level": "raid0", 00:10:11.986 "superblock": true, 00:10:11.986 "num_base_bdevs": 4, 00:10:11.986 "num_base_bdevs_discovered": 2, 00:10:11.986 "num_base_bdevs_operational": 4, 
00:10:11.986 "base_bdevs_list": [ 00:10:11.986 { 00:10:11.986 "name": "BaseBdev1", 00:10:11.986 "uuid": "1af31676-71db-4aa8-9675-542e6cdba126", 00:10:11.986 "is_configured": true, 00:10:11.986 "data_offset": 2048, 00:10:11.986 "data_size": 63488 00:10:11.986 }, 00:10:11.986 { 00:10:11.986 "name": "BaseBdev2", 00:10:11.986 "uuid": "1d0fcac2-5333-46da-920d-7f0e7c41c533", 00:10:11.986 "is_configured": true, 00:10:11.986 "data_offset": 2048, 00:10:11.986 "data_size": 63488 00:10:11.986 }, 00:10:11.986 { 00:10:11.986 "name": "BaseBdev3", 00:10:11.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.986 "is_configured": false, 00:10:11.986 "data_offset": 0, 00:10:11.986 "data_size": 0 00:10:11.986 }, 00:10:11.986 { 00:10:11.986 "name": "BaseBdev4", 00:10:11.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.986 "is_configured": false, 00:10:11.986 "data_offset": 0, 00:10:11.986 "data_size": 0 00:10:11.986 } 00:10:11.986 ] 00:10:11.986 }' 00:10:11.986 04:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.986 04:08:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.247 04:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:12.247 04:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.247 04:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.247 [2024-11-21 04:08:12.215635] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:12.247 BaseBdev3 00:10:12.509 04:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.509 04:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:12.509 04:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev3 00:10:12.509 04:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:12.509 04:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:12.509 04:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:12.509 04:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:12.509 04:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:12.509 04:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.509 04:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.509 04:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.509 04:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:12.509 04:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.509 04:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.509 [ 00:10:12.509 { 00:10:12.509 "name": "BaseBdev3", 00:10:12.509 "aliases": [ 00:10:12.509 "b2930be4-2d31-4777-85cf-3999f4f479c8" 00:10:12.509 ], 00:10:12.509 "product_name": "Malloc disk", 00:10:12.509 "block_size": 512, 00:10:12.509 "num_blocks": 65536, 00:10:12.509 "uuid": "b2930be4-2d31-4777-85cf-3999f4f479c8", 00:10:12.509 "assigned_rate_limits": { 00:10:12.509 "rw_ios_per_sec": 0, 00:10:12.509 "rw_mbytes_per_sec": 0, 00:10:12.509 "r_mbytes_per_sec": 0, 00:10:12.509 "w_mbytes_per_sec": 0 00:10:12.509 }, 00:10:12.509 "claimed": true, 00:10:12.509 "claim_type": "exclusive_write", 00:10:12.509 "zoned": false, 00:10:12.509 "supported_io_types": { 00:10:12.509 "read": true, 00:10:12.509 
"write": true, 00:10:12.509 "unmap": true, 00:10:12.509 "flush": true, 00:10:12.509 "reset": true, 00:10:12.509 "nvme_admin": false, 00:10:12.509 "nvme_io": false, 00:10:12.509 "nvme_io_md": false, 00:10:12.509 "write_zeroes": true, 00:10:12.509 "zcopy": true, 00:10:12.509 "get_zone_info": false, 00:10:12.509 "zone_management": false, 00:10:12.509 "zone_append": false, 00:10:12.509 "compare": false, 00:10:12.509 "compare_and_write": false, 00:10:12.509 "abort": true, 00:10:12.509 "seek_hole": false, 00:10:12.509 "seek_data": false, 00:10:12.509 "copy": true, 00:10:12.509 "nvme_iov_md": false 00:10:12.509 }, 00:10:12.509 "memory_domains": [ 00:10:12.509 { 00:10:12.509 "dma_device_id": "system", 00:10:12.509 "dma_device_type": 1 00:10:12.509 }, 00:10:12.509 { 00:10:12.509 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.509 "dma_device_type": 2 00:10:12.509 } 00:10:12.509 ], 00:10:12.509 "driver_specific": {} 00:10:12.509 } 00:10:12.509 ] 00:10:12.509 04:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.509 04:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:12.509 04:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:12.509 04:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:12.509 04:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:12.509 04:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:12.509 04:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:12.509 04:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:12.509 04:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:12.509 04:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:12.509 04:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.509 04:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.509 04:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.509 04:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.509 04:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.509 04:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.509 04:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.509 04:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.509 04:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.509 04:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.509 "name": "Existed_Raid", 00:10:12.509 "uuid": "40a49e93-e287-4aa4-a1e7-f5851c30cb41", 00:10:12.509 "strip_size_kb": 64, 00:10:12.509 "state": "configuring", 00:10:12.509 "raid_level": "raid0", 00:10:12.509 "superblock": true, 00:10:12.510 "num_base_bdevs": 4, 00:10:12.510 "num_base_bdevs_discovered": 3, 00:10:12.510 "num_base_bdevs_operational": 4, 00:10:12.510 "base_bdevs_list": [ 00:10:12.510 { 00:10:12.510 "name": "BaseBdev1", 00:10:12.510 "uuid": "1af31676-71db-4aa8-9675-542e6cdba126", 00:10:12.510 "is_configured": true, 00:10:12.510 "data_offset": 2048, 00:10:12.510 "data_size": 63488 00:10:12.510 }, 00:10:12.510 { 00:10:12.510 "name": "BaseBdev2", 00:10:12.510 "uuid": 
"1d0fcac2-5333-46da-920d-7f0e7c41c533", 00:10:12.510 "is_configured": true, 00:10:12.510 "data_offset": 2048, 00:10:12.510 "data_size": 63488 00:10:12.510 }, 00:10:12.510 { 00:10:12.510 "name": "BaseBdev3", 00:10:12.510 "uuid": "b2930be4-2d31-4777-85cf-3999f4f479c8", 00:10:12.510 "is_configured": true, 00:10:12.510 "data_offset": 2048, 00:10:12.510 "data_size": 63488 00:10:12.510 }, 00:10:12.510 { 00:10:12.510 "name": "BaseBdev4", 00:10:12.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.510 "is_configured": false, 00:10:12.510 "data_offset": 0, 00:10:12.510 "data_size": 0 00:10:12.510 } 00:10:12.510 ] 00:10:12.510 }' 00:10:12.510 04:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.510 04:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.771 04:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:12.771 04:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.771 04:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.771 [2024-11-21 04:08:12.652878] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:12.771 [2024-11-21 04:08:12.653127] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:10:12.771 [2024-11-21 04:08:12.653151] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:12.771 [2024-11-21 04:08:12.653587] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:10:12.771 [2024-11-21 04:08:12.653767] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:10:12.771 [2024-11-21 04:08:12.653788] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 
00:10:12.771 BaseBdev4 00:10:12.771 [2024-11-21 04:08:12.653957] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:12.771 04:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.771 04:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:12.771 04:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:12.771 04:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:12.771 04:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:12.771 04:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:12.771 04:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:12.771 04:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:12.771 04:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.771 04:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.772 04:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.772 04:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:12.772 04:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.772 04:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.772 [ 00:10:12.772 { 00:10:12.772 "name": "BaseBdev4", 00:10:12.772 "aliases": [ 00:10:12.772 "0868e684-f5de-4171-84e6-c199f1f332b0" 00:10:12.772 ], 00:10:12.772 "product_name": "Malloc disk", 00:10:12.772 "block_size": 512, 00:10:12.772 
"num_blocks": 65536, 00:10:12.772 "uuid": "0868e684-f5de-4171-84e6-c199f1f332b0", 00:10:12.772 "assigned_rate_limits": { 00:10:12.772 "rw_ios_per_sec": 0, 00:10:12.772 "rw_mbytes_per_sec": 0, 00:10:12.772 "r_mbytes_per_sec": 0, 00:10:12.772 "w_mbytes_per_sec": 0 00:10:12.772 }, 00:10:12.772 "claimed": true, 00:10:12.772 "claim_type": "exclusive_write", 00:10:12.772 "zoned": false, 00:10:12.772 "supported_io_types": { 00:10:12.772 "read": true, 00:10:12.772 "write": true, 00:10:12.772 "unmap": true, 00:10:12.772 "flush": true, 00:10:12.772 "reset": true, 00:10:12.772 "nvme_admin": false, 00:10:12.772 "nvme_io": false, 00:10:12.772 "nvme_io_md": false, 00:10:12.772 "write_zeroes": true, 00:10:12.772 "zcopy": true, 00:10:12.772 "get_zone_info": false, 00:10:12.772 "zone_management": false, 00:10:12.772 "zone_append": false, 00:10:12.772 "compare": false, 00:10:12.772 "compare_and_write": false, 00:10:12.772 "abort": true, 00:10:12.772 "seek_hole": false, 00:10:12.772 "seek_data": false, 00:10:12.772 "copy": true, 00:10:12.772 "nvme_iov_md": false 00:10:12.772 }, 00:10:12.772 "memory_domains": [ 00:10:12.772 { 00:10:12.772 "dma_device_id": "system", 00:10:12.772 "dma_device_type": 1 00:10:12.772 }, 00:10:12.772 { 00:10:12.772 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.772 "dma_device_type": 2 00:10:12.772 } 00:10:12.772 ], 00:10:12.772 "driver_specific": {} 00:10:12.772 } 00:10:12.772 ] 00:10:12.772 04:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.772 04:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:12.772 04:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:12.772 04:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:12.772 04:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:10:12.772 04:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:12.772 04:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:12.772 04:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:12.772 04:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:12.772 04:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:12.772 04:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.772 04:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.772 04:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.772 04:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.772 04:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.772 04:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.772 04:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.772 04:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.772 04:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.772 04:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.772 "name": "Existed_Raid", 00:10:12.772 "uuid": "40a49e93-e287-4aa4-a1e7-f5851c30cb41", 00:10:12.772 "strip_size_kb": 64, 00:10:12.772 "state": "online", 00:10:12.772 "raid_level": "raid0", 00:10:12.772 "superblock": true, 00:10:12.772 "num_base_bdevs": 4, 
00:10:12.772 "num_base_bdevs_discovered": 4, 00:10:12.772 "num_base_bdevs_operational": 4, 00:10:12.772 "base_bdevs_list": [ 00:10:12.772 { 00:10:12.772 "name": "BaseBdev1", 00:10:12.772 "uuid": "1af31676-71db-4aa8-9675-542e6cdba126", 00:10:12.772 "is_configured": true, 00:10:12.772 "data_offset": 2048, 00:10:12.772 "data_size": 63488 00:10:12.772 }, 00:10:12.772 { 00:10:12.772 "name": "BaseBdev2", 00:10:12.772 "uuid": "1d0fcac2-5333-46da-920d-7f0e7c41c533", 00:10:12.772 "is_configured": true, 00:10:12.772 "data_offset": 2048, 00:10:12.772 "data_size": 63488 00:10:12.772 }, 00:10:12.772 { 00:10:12.772 "name": "BaseBdev3", 00:10:12.772 "uuid": "b2930be4-2d31-4777-85cf-3999f4f479c8", 00:10:12.772 "is_configured": true, 00:10:12.772 "data_offset": 2048, 00:10:12.772 "data_size": 63488 00:10:12.772 }, 00:10:12.772 { 00:10:12.772 "name": "BaseBdev4", 00:10:12.772 "uuid": "0868e684-f5de-4171-84e6-c199f1f332b0", 00:10:12.772 "is_configured": true, 00:10:12.772 "data_offset": 2048, 00:10:12.772 "data_size": 63488 00:10:12.772 } 00:10:12.772 ] 00:10:12.772 }' 00:10:12.772 04:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.772 04:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.344 04:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:13.344 04:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:13.344 04:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:13.344 04:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:13.344 04:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:13.344 04:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:13.344 
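The assembly loop above repeats one pattern: create a malloc base bdev, wait for it, then re-query the array with `bdev_raid_get_bdevs all` piped through `jq -r '.[] | select(.name == "Existed_Raid")'`, checking that `num_base_bdevs_discovered` ticks up while the state stays `configuring` until all four bases exist. A minimal Python equivalent of that selection step, run against a trimmed sample of the JSON shown in the log (field names taken from the dump; the snippet is illustrative, not a live RPC call):

```python
import json

# Sample mirroring the shape of the final bdev_raid_get_bdevs output above.
raid_bdevs_json = '''
[
  {
    "name": "Existed_Raid",
    "state": "online",
    "raid_level": "raid0",
    "strip_size_kb": 64,
    "superblock": true,
    "num_base_bdevs": 4,
    "num_base_bdevs_discovered": 4,
    "num_base_bdevs_operational": 4
  }
]
'''

def select_raid(bdevs, name):
    """Python equivalent of jq '.[] | select(.name == NAME)'."""
    return next(b for b in bdevs if b["name"] == name)

info = select_raid(json.loads(raid_bdevs_json), "Existed_Raid")
print(info["state"], info["num_base_bdevs_discovered"])
```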
04:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:13.344 04:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:13.344 04:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.344 04:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.344 [2024-11-21 04:08:13.108668] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:13.344 04:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.344 04:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:13.344 "name": "Existed_Raid", 00:10:13.344 "aliases": [ 00:10:13.344 "40a49e93-e287-4aa4-a1e7-f5851c30cb41" 00:10:13.344 ], 00:10:13.344 "product_name": "Raid Volume", 00:10:13.344 "block_size": 512, 00:10:13.344 "num_blocks": 253952, 00:10:13.344 "uuid": "40a49e93-e287-4aa4-a1e7-f5851c30cb41", 00:10:13.344 "assigned_rate_limits": { 00:10:13.344 "rw_ios_per_sec": 0, 00:10:13.344 "rw_mbytes_per_sec": 0, 00:10:13.344 "r_mbytes_per_sec": 0, 00:10:13.344 "w_mbytes_per_sec": 0 00:10:13.344 }, 00:10:13.344 "claimed": false, 00:10:13.344 "zoned": false, 00:10:13.344 "supported_io_types": { 00:10:13.344 "read": true, 00:10:13.344 "write": true, 00:10:13.344 "unmap": true, 00:10:13.344 "flush": true, 00:10:13.344 "reset": true, 00:10:13.344 "nvme_admin": false, 00:10:13.344 "nvme_io": false, 00:10:13.344 "nvme_io_md": false, 00:10:13.344 "write_zeroes": true, 00:10:13.344 "zcopy": false, 00:10:13.344 "get_zone_info": false, 00:10:13.344 "zone_management": false, 00:10:13.344 "zone_append": false, 00:10:13.344 "compare": false, 00:10:13.344 "compare_and_write": false, 00:10:13.344 "abort": false, 00:10:13.344 "seek_hole": false, 00:10:13.344 "seek_data": false, 00:10:13.344 "copy": false, 00:10:13.344 
"nvme_iov_md": false 00:10:13.344 }, 00:10:13.344 "memory_domains": [ 00:10:13.344 { 00:10:13.344 "dma_device_id": "system", 00:10:13.344 "dma_device_type": 1 00:10:13.344 }, 00:10:13.344 { 00:10:13.344 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.344 "dma_device_type": 2 00:10:13.344 }, 00:10:13.344 { 00:10:13.344 "dma_device_id": "system", 00:10:13.344 "dma_device_type": 1 00:10:13.344 }, 00:10:13.344 { 00:10:13.344 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.344 "dma_device_type": 2 00:10:13.344 }, 00:10:13.345 { 00:10:13.345 "dma_device_id": "system", 00:10:13.345 "dma_device_type": 1 00:10:13.345 }, 00:10:13.345 { 00:10:13.345 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.345 "dma_device_type": 2 00:10:13.345 }, 00:10:13.345 { 00:10:13.345 "dma_device_id": "system", 00:10:13.345 "dma_device_type": 1 00:10:13.345 }, 00:10:13.345 { 00:10:13.345 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.345 "dma_device_type": 2 00:10:13.345 } 00:10:13.345 ], 00:10:13.345 "driver_specific": { 00:10:13.345 "raid": { 00:10:13.345 "uuid": "40a49e93-e287-4aa4-a1e7-f5851c30cb41", 00:10:13.345 "strip_size_kb": 64, 00:10:13.345 "state": "online", 00:10:13.345 "raid_level": "raid0", 00:10:13.345 "superblock": true, 00:10:13.345 "num_base_bdevs": 4, 00:10:13.345 "num_base_bdevs_discovered": 4, 00:10:13.345 "num_base_bdevs_operational": 4, 00:10:13.345 "base_bdevs_list": [ 00:10:13.345 { 00:10:13.345 "name": "BaseBdev1", 00:10:13.345 "uuid": "1af31676-71db-4aa8-9675-542e6cdba126", 00:10:13.345 "is_configured": true, 00:10:13.345 "data_offset": 2048, 00:10:13.345 "data_size": 63488 00:10:13.345 }, 00:10:13.345 { 00:10:13.345 "name": "BaseBdev2", 00:10:13.345 "uuid": "1d0fcac2-5333-46da-920d-7f0e7c41c533", 00:10:13.345 "is_configured": true, 00:10:13.345 "data_offset": 2048, 00:10:13.345 "data_size": 63488 00:10:13.345 }, 00:10:13.345 { 00:10:13.345 "name": "BaseBdev3", 00:10:13.345 "uuid": "b2930be4-2d31-4777-85cf-3999f4f479c8", 00:10:13.345 "is_configured": true, 
00:10:13.345 "data_offset": 2048, 00:10:13.345 "data_size": 63488 00:10:13.345 }, 00:10:13.345 { 00:10:13.345 "name": "BaseBdev4", 00:10:13.345 "uuid": "0868e684-f5de-4171-84e6-c199f1f332b0", 00:10:13.345 "is_configured": true, 00:10:13.345 "data_offset": 2048, 00:10:13.345 "data_size": 63488 00:10:13.345 } 00:10:13.345 ] 00:10:13.345 } 00:10:13.345 } 00:10:13.345 }' 00:10:13.345 04:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:13.345 04:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:13.345 BaseBdev2 00:10:13.345 BaseBdev3 00:10:13.345 BaseBdev4' 00:10:13.345 04:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.345 04:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:13.345 04:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:13.345 04:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.345 04:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:13.345 04:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.345 04:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.345 04:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.345 04:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:13.345 04:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:13.345 04:08:13 
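The property check above applies two jq filters: one extracts the configured base bdev names from the Raid Volume dump, the other builds a `block_size md_size md_interleave dif_type` layout string for the raid bdev and each base bdev and compares them (jq's `join` renders absent/null fields as empty strings, which is why the log compares against `'512 '` with trailing spaces). A Python rendering of both filters over a trimmed sample of the dump (illustrative data, fields copied from the log):

```python
import json

raid_info = json.loads('''
{
  "block_size": 512,
  "driver_specific": {
    "raid": {
      "base_bdevs_list": [
        {"name": "BaseBdev1", "is_configured": true},
        {"name": "BaseBdev2", "is_configured": true},
        {"name": "BaseBdev3", "is_configured": true},
        {"name": "BaseBdev4", "is_configured": true}
      ]
    }
  }
}
''')

# jq: .driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name
names = [b["name"]
         for b in raid_info["driver_specific"]["raid"]["base_bdevs_list"]
         if b["is_configured"]]

# jq: [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")
def layout_key(bdev):
    fields = ("block_size", "md_size", "md_interleave", "dif_type")
    # Missing keys join as empty strings, exactly as jq renders nulls.
    return " ".join(str(bdev[f]) if f in bdev else "" for f in fields)

print(names)
print(repr(layout_key(raid_info)))
```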
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:13.345 04:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.345 04:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:13.345 04:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.345 04:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.345 04:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.345 04:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:13.345 04:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:13.345 04:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:13.345 04:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:13.345 04:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.345 04:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.345 04:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.345 04:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.606 04:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:13.606 04:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:13.606 04:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:10:13.606 04:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.606 04:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:13.606 04:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.606 04:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.606 04:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.606 04:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:13.606 04:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:13.606 04:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:13.606 04:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.606 04:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.606 [2024-11-21 04:08:13.375847] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:13.606 [2024-11-21 04:08:13.375891] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:13.606 [2024-11-21 04:08:13.375956] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:13.606 04:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.606 04:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:13.606 04:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:13.606 04:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:10:13.606 04:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:13.606 04:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:13.606 04:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:13.606 04:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:13.606 04:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:13.606 04:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:13.606 04:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:13.606 04:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:13.607 04:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.607 04:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.607 04:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.607 04:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.607 04:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.607 04:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.607 04:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.607 04:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.607 04:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:13.607 04:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.607 "name": "Existed_Raid", 00:10:13.607 "uuid": "40a49e93-e287-4aa4-a1e7-f5851c30cb41", 00:10:13.607 "strip_size_kb": 64, 00:10:13.607 "state": "offline", 00:10:13.607 "raid_level": "raid0", 00:10:13.607 "superblock": true, 00:10:13.607 "num_base_bdevs": 4, 00:10:13.607 "num_base_bdevs_discovered": 3, 00:10:13.607 "num_base_bdevs_operational": 3, 00:10:13.607 "base_bdevs_list": [ 00:10:13.607 { 00:10:13.607 "name": null, 00:10:13.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.607 "is_configured": false, 00:10:13.607 "data_offset": 0, 00:10:13.607 "data_size": 63488 00:10:13.607 }, 00:10:13.607 { 00:10:13.607 "name": "BaseBdev2", 00:10:13.607 "uuid": "1d0fcac2-5333-46da-920d-7f0e7c41c533", 00:10:13.607 "is_configured": true, 00:10:13.607 "data_offset": 2048, 00:10:13.607 "data_size": 63488 00:10:13.607 }, 00:10:13.607 { 00:10:13.607 "name": "BaseBdev3", 00:10:13.607 "uuid": "b2930be4-2d31-4777-85cf-3999f4f479c8", 00:10:13.607 "is_configured": true, 00:10:13.607 "data_offset": 2048, 00:10:13.607 "data_size": 63488 00:10:13.607 }, 00:10:13.607 { 00:10:13.607 "name": "BaseBdev4", 00:10:13.607 "uuid": "0868e684-f5de-4171-84e6-c199f1f332b0", 00:10:13.607 "is_configured": true, 00:10:13.607 "data_offset": 2048, 00:10:13.607 "data_size": 63488 00:10:13.607 } 00:10:13.607 ] 00:10:13.607 }' 00:10:13.607 04:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.607 04:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.866 04:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:13.866 04:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:13.866 04:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.866 
04:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.866 04:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.866 04:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:13.866 04:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.866 04:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:13.866 04:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:13.866 04:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:13.866 04:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.866 04:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.126 [2024-11-21 04:08:13.845184] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:14.126 04:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.126 04:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:14.126 04:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:14.126 04:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.126 04:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.126 04:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.126 04:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:14.126 04:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:14.126 04:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:14.126 04:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:14.126 04:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:14.126 04:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.126 04:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.126 [2024-11-21 04:08:13.922772] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:14.126 04:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.126 04:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:14.126 04:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:14.126 04:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.126 04:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:14.126 04:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.126 04:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.126 04:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.126 04:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:14.126 04:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:14.127 04:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:14.127 04:08:13 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.127 04:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.127 [2024-11-21 04:08:13.988335] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:14.127 [2024-11-21 04:08:13.988399] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:10:14.127 04:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.127 04:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:14.127 04:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:14.127 04:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.127 04:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.127 04:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.127 04:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:14.127 04:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.127 04:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:14.127 04:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:14.127 04:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:14.127 04:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:14.127 04:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:14.127 04:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:10:14.127 04:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.127 04:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.127 BaseBdev2 00:10:14.127 04:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.127 04:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:14.127 04:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:14.127 04:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:14.127 04:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:14.127 04:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:14.127 04:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:14.127 04:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:14.127 04:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.127 04:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.127 04:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.127 04:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:14.127 04:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.127 04:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.388 [ 00:10:14.388 { 00:10:14.388 "name": "BaseBdev2", 00:10:14.388 "aliases": [ 00:10:14.388 
"f38993d4-852c-4135-8209-dcf5d59f40d3" 00:10:14.388 ], 00:10:14.388 "product_name": "Malloc disk", 00:10:14.388 "block_size": 512, 00:10:14.388 "num_blocks": 65536, 00:10:14.388 "uuid": "f38993d4-852c-4135-8209-dcf5d59f40d3", 00:10:14.388 "assigned_rate_limits": { 00:10:14.388 "rw_ios_per_sec": 0, 00:10:14.388 "rw_mbytes_per_sec": 0, 00:10:14.388 "r_mbytes_per_sec": 0, 00:10:14.388 "w_mbytes_per_sec": 0 00:10:14.388 }, 00:10:14.388 "claimed": false, 00:10:14.388 "zoned": false, 00:10:14.388 "supported_io_types": { 00:10:14.388 "read": true, 00:10:14.388 "write": true, 00:10:14.388 "unmap": true, 00:10:14.388 "flush": true, 00:10:14.388 "reset": true, 00:10:14.388 "nvme_admin": false, 00:10:14.388 "nvme_io": false, 00:10:14.388 "nvme_io_md": false, 00:10:14.388 "write_zeroes": true, 00:10:14.388 "zcopy": true, 00:10:14.388 "get_zone_info": false, 00:10:14.388 "zone_management": false, 00:10:14.388 "zone_append": false, 00:10:14.388 "compare": false, 00:10:14.388 "compare_and_write": false, 00:10:14.388 "abort": true, 00:10:14.388 "seek_hole": false, 00:10:14.388 "seek_data": false, 00:10:14.388 "copy": true, 00:10:14.388 "nvme_iov_md": false 00:10:14.388 }, 00:10:14.388 "memory_domains": [ 00:10:14.388 { 00:10:14.388 "dma_device_id": "system", 00:10:14.388 "dma_device_type": 1 00:10:14.388 }, 00:10:14.388 { 00:10:14.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.388 "dma_device_type": 2 00:10:14.388 } 00:10:14.388 ], 00:10:14.388 "driver_specific": {} 00:10:14.388 } 00:10:14.388 ] 00:10:14.388 04:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.388 04:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:14.388 04:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:14.388 04:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:14.388 04:08:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:14.388 04:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.388 04:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.388 BaseBdev3 00:10:14.388 04:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.388 04:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:14.388 04:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:14.388 04:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:14.389 04:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:14.389 04:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:14.389 04:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:14.389 04:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:14.389 04:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.389 04:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.389 04:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.389 04:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:14.389 04:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.389 04:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.389 [ 00:10:14.389 { 
00:10:14.389 "name": "BaseBdev3", 00:10:14.389 "aliases": [ 00:10:14.389 "390d3d2d-0ac2-4113-9b6a-35d26afb8ec2" 00:10:14.389 ], 00:10:14.389 "product_name": "Malloc disk", 00:10:14.389 "block_size": 512, 00:10:14.389 "num_blocks": 65536, 00:10:14.389 "uuid": "390d3d2d-0ac2-4113-9b6a-35d26afb8ec2", 00:10:14.389 "assigned_rate_limits": { 00:10:14.389 "rw_ios_per_sec": 0, 00:10:14.389 "rw_mbytes_per_sec": 0, 00:10:14.389 "r_mbytes_per_sec": 0, 00:10:14.389 "w_mbytes_per_sec": 0 00:10:14.389 }, 00:10:14.389 "claimed": false, 00:10:14.389 "zoned": false, 00:10:14.389 "supported_io_types": { 00:10:14.389 "read": true, 00:10:14.389 "write": true, 00:10:14.389 "unmap": true, 00:10:14.389 "flush": true, 00:10:14.389 "reset": true, 00:10:14.389 "nvme_admin": false, 00:10:14.389 "nvme_io": false, 00:10:14.389 "nvme_io_md": false, 00:10:14.389 "write_zeroes": true, 00:10:14.389 "zcopy": true, 00:10:14.389 "get_zone_info": false, 00:10:14.389 "zone_management": false, 00:10:14.389 "zone_append": false, 00:10:14.389 "compare": false, 00:10:14.389 "compare_and_write": false, 00:10:14.389 "abort": true, 00:10:14.389 "seek_hole": false, 00:10:14.389 "seek_data": false, 00:10:14.389 "copy": true, 00:10:14.389 "nvme_iov_md": false 00:10:14.389 }, 00:10:14.389 "memory_domains": [ 00:10:14.389 { 00:10:14.389 "dma_device_id": "system", 00:10:14.389 "dma_device_type": 1 00:10:14.389 }, 00:10:14.389 { 00:10:14.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.389 "dma_device_type": 2 00:10:14.389 } 00:10:14.389 ], 00:10:14.389 "driver_specific": {} 00:10:14.389 } 00:10:14.389 ] 00:10:14.389 04:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.389 04:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:14.389 04:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:14.389 04:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:10:14.389 04:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:14.389 04:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.389 04:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.389 BaseBdev4 00:10:14.389 04:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.389 04:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:14.389 04:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:14.389 04:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:14.389 04:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:14.389 04:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:14.389 04:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:14.389 04:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:14.389 04:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.389 04:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.389 04:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.389 04:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:14.389 04:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.389 04:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:10:14.389 [ 00:10:14.389 { 00:10:14.389 "name": "BaseBdev4", 00:10:14.389 "aliases": [ 00:10:14.389 "97949e50-9c7f-4b06-ad5c-d1d9c76a3c96" 00:10:14.389 ], 00:10:14.389 "product_name": "Malloc disk", 00:10:14.389 "block_size": 512, 00:10:14.389 "num_blocks": 65536, 00:10:14.389 "uuid": "97949e50-9c7f-4b06-ad5c-d1d9c76a3c96", 00:10:14.389 "assigned_rate_limits": { 00:10:14.389 "rw_ios_per_sec": 0, 00:10:14.389 "rw_mbytes_per_sec": 0, 00:10:14.389 "r_mbytes_per_sec": 0, 00:10:14.389 "w_mbytes_per_sec": 0 00:10:14.389 }, 00:10:14.389 "claimed": false, 00:10:14.389 "zoned": false, 00:10:14.389 "supported_io_types": { 00:10:14.389 "read": true, 00:10:14.389 "write": true, 00:10:14.389 "unmap": true, 00:10:14.389 "flush": true, 00:10:14.389 "reset": true, 00:10:14.389 "nvme_admin": false, 00:10:14.389 "nvme_io": false, 00:10:14.389 "nvme_io_md": false, 00:10:14.389 "write_zeroes": true, 00:10:14.389 "zcopy": true, 00:10:14.389 "get_zone_info": false, 00:10:14.389 "zone_management": false, 00:10:14.389 "zone_append": false, 00:10:14.389 "compare": false, 00:10:14.389 "compare_and_write": false, 00:10:14.389 "abort": true, 00:10:14.389 "seek_hole": false, 00:10:14.389 "seek_data": false, 00:10:14.389 "copy": true, 00:10:14.389 "nvme_iov_md": false 00:10:14.389 }, 00:10:14.389 "memory_domains": [ 00:10:14.389 { 00:10:14.389 "dma_device_id": "system", 00:10:14.389 "dma_device_type": 1 00:10:14.389 }, 00:10:14.389 { 00:10:14.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.389 "dma_device_type": 2 00:10:14.389 } 00:10:14.389 ], 00:10:14.389 "driver_specific": {} 00:10:14.389 } 00:10:14.389 ] 00:10:14.389 04:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.389 04:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:14.389 04:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:14.389 04:08:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:14.389 04:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:14.389 04:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.389 04:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.389 [2024-11-21 04:08:14.241052] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:14.389 [2024-11-21 04:08:14.241105] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:14.389 [2024-11-21 04:08:14.241146] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:14.389 [2024-11-21 04:08:14.243445] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:14.389 [2024-11-21 04:08:14.243497] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:14.389 04:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.389 04:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:14.389 04:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:14.389 04:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:14.389 04:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:14.389 04:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:14.389 04:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:14.389 04:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.389 04:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.389 04:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.389 04:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.389 04:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.389 04:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.389 04:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.389 04:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.389 04:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.389 04:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.389 "name": "Existed_Raid", 00:10:14.389 "uuid": "7fff2b82-33e6-4ae8-8741-5c555d43d202", 00:10:14.389 "strip_size_kb": 64, 00:10:14.389 "state": "configuring", 00:10:14.389 "raid_level": "raid0", 00:10:14.389 "superblock": true, 00:10:14.389 "num_base_bdevs": 4, 00:10:14.389 "num_base_bdevs_discovered": 3, 00:10:14.389 "num_base_bdevs_operational": 4, 00:10:14.389 "base_bdevs_list": [ 00:10:14.389 { 00:10:14.389 "name": "BaseBdev1", 00:10:14.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.389 "is_configured": false, 00:10:14.389 "data_offset": 0, 00:10:14.389 "data_size": 0 00:10:14.389 }, 00:10:14.389 { 00:10:14.389 "name": "BaseBdev2", 00:10:14.389 "uuid": "f38993d4-852c-4135-8209-dcf5d59f40d3", 00:10:14.389 "is_configured": true, 00:10:14.389 "data_offset": 2048, 00:10:14.389 "data_size": 63488 
00:10:14.389 }, 00:10:14.389 { 00:10:14.389 "name": "BaseBdev3", 00:10:14.389 "uuid": "390d3d2d-0ac2-4113-9b6a-35d26afb8ec2", 00:10:14.389 "is_configured": true, 00:10:14.389 "data_offset": 2048, 00:10:14.389 "data_size": 63488 00:10:14.389 }, 00:10:14.389 { 00:10:14.389 "name": "BaseBdev4", 00:10:14.389 "uuid": "97949e50-9c7f-4b06-ad5c-d1d9c76a3c96", 00:10:14.389 "is_configured": true, 00:10:14.389 "data_offset": 2048, 00:10:14.389 "data_size": 63488 00:10:14.389 } 00:10:14.390 ] 00:10:14.390 }' 00:10:14.390 04:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.390 04:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.960 04:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:14.960 04:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.960 04:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.960 [2024-11-21 04:08:14.704298] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:14.960 04:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.960 04:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:14.960 04:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:14.960 04:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:14.960 04:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:14.960 04:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:14.960 04:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:14.960 04:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.960 04:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.960 04:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.960 04:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.960 04:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.960 04:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.960 04:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.960 04:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.960 04:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.960 04:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.960 "name": "Existed_Raid", 00:10:14.960 "uuid": "7fff2b82-33e6-4ae8-8741-5c555d43d202", 00:10:14.960 "strip_size_kb": 64, 00:10:14.960 "state": "configuring", 00:10:14.960 "raid_level": "raid0", 00:10:14.960 "superblock": true, 00:10:14.960 "num_base_bdevs": 4, 00:10:14.960 "num_base_bdevs_discovered": 2, 00:10:14.960 "num_base_bdevs_operational": 4, 00:10:14.960 "base_bdevs_list": [ 00:10:14.960 { 00:10:14.960 "name": "BaseBdev1", 00:10:14.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.960 "is_configured": false, 00:10:14.960 "data_offset": 0, 00:10:14.960 "data_size": 0 00:10:14.960 }, 00:10:14.960 { 00:10:14.960 "name": null, 00:10:14.960 "uuid": "f38993d4-852c-4135-8209-dcf5d59f40d3", 00:10:14.960 "is_configured": false, 00:10:14.960 "data_offset": 0, 00:10:14.960 "data_size": 63488 
00:10:14.960 }, 00:10:14.960 { 00:10:14.960 "name": "BaseBdev3", 00:10:14.960 "uuid": "390d3d2d-0ac2-4113-9b6a-35d26afb8ec2", 00:10:14.960 "is_configured": true, 00:10:14.960 "data_offset": 2048, 00:10:14.960 "data_size": 63488 00:10:14.960 }, 00:10:14.960 { 00:10:14.960 "name": "BaseBdev4", 00:10:14.960 "uuid": "97949e50-9c7f-4b06-ad5c-d1d9c76a3c96", 00:10:14.960 "is_configured": true, 00:10:14.960 "data_offset": 2048, 00:10:14.960 "data_size": 63488 00:10:14.960 } 00:10:14.960 ] 00:10:14.960 }' 00:10:14.960 04:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.960 04:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.221 04:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.221 04:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:15.221 04:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.221 04:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.221 04:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.221 04:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:15.221 04:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:15.221 04:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.221 04:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.221 [2024-11-21 04:08:15.137429] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:15.221 BaseBdev1 00:10:15.221 04:08:15 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.221 04:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:15.221 04:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:15.221 04:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:15.221 04:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:15.221 04:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:15.221 04:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:15.221 04:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:15.221 04:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.221 04:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.221 04:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.221 04:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:15.221 04:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.221 04:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.221 [ 00:10:15.221 { 00:10:15.221 "name": "BaseBdev1", 00:10:15.221 "aliases": [ 00:10:15.221 "6058097a-e31a-485a-90c8-bfeb3167d2e7" 00:10:15.221 ], 00:10:15.221 "product_name": "Malloc disk", 00:10:15.221 "block_size": 512, 00:10:15.221 "num_blocks": 65536, 00:10:15.221 "uuid": "6058097a-e31a-485a-90c8-bfeb3167d2e7", 00:10:15.221 "assigned_rate_limits": { 00:10:15.221 "rw_ios_per_sec": 0, 00:10:15.221 "rw_mbytes_per_sec": 0, 
00:10:15.221 "r_mbytes_per_sec": 0, 00:10:15.221 "w_mbytes_per_sec": 0 00:10:15.221 }, 00:10:15.221 "claimed": true, 00:10:15.221 "claim_type": "exclusive_write", 00:10:15.221 "zoned": false, 00:10:15.221 "supported_io_types": { 00:10:15.221 "read": true, 00:10:15.221 "write": true, 00:10:15.221 "unmap": true, 00:10:15.221 "flush": true, 00:10:15.221 "reset": true, 00:10:15.221 "nvme_admin": false, 00:10:15.221 "nvme_io": false, 00:10:15.221 "nvme_io_md": false, 00:10:15.221 "write_zeroes": true, 00:10:15.221 "zcopy": true, 00:10:15.221 "get_zone_info": false, 00:10:15.221 "zone_management": false, 00:10:15.221 "zone_append": false, 00:10:15.221 "compare": false, 00:10:15.221 "compare_and_write": false, 00:10:15.221 "abort": true, 00:10:15.221 "seek_hole": false, 00:10:15.221 "seek_data": false, 00:10:15.221 "copy": true, 00:10:15.221 "nvme_iov_md": false 00:10:15.221 }, 00:10:15.221 "memory_domains": [ 00:10:15.221 { 00:10:15.221 "dma_device_id": "system", 00:10:15.221 "dma_device_type": 1 00:10:15.221 }, 00:10:15.221 { 00:10:15.221 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.221 "dma_device_type": 2 00:10:15.221 } 00:10:15.221 ], 00:10:15.221 "driver_specific": {} 00:10:15.221 } 00:10:15.221 ] 00:10:15.221 04:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.221 04:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:15.221 04:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:15.221 04:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:15.221 04:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:15.221 04:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:15.221 04:08:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:15.221 04:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:15.221 04:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.221 04:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.221 04:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.221 04:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.221 04:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.222 04:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.222 04:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.222 04:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.482 04:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.483 04:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.483 "name": "Existed_Raid", 00:10:15.483 "uuid": "7fff2b82-33e6-4ae8-8741-5c555d43d202", 00:10:15.483 "strip_size_kb": 64, 00:10:15.483 "state": "configuring", 00:10:15.483 "raid_level": "raid0", 00:10:15.483 "superblock": true, 00:10:15.483 "num_base_bdevs": 4, 00:10:15.483 "num_base_bdevs_discovered": 3, 00:10:15.483 "num_base_bdevs_operational": 4, 00:10:15.483 "base_bdevs_list": [ 00:10:15.483 { 00:10:15.483 "name": "BaseBdev1", 00:10:15.483 "uuid": "6058097a-e31a-485a-90c8-bfeb3167d2e7", 00:10:15.483 "is_configured": true, 00:10:15.483 "data_offset": 2048, 00:10:15.483 "data_size": 63488 00:10:15.483 }, 00:10:15.483 { 
00:10:15.483 "name": null, 00:10:15.483 "uuid": "f38993d4-852c-4135-8209-dcf5d59f40d3", 00:10:15.483 "is_configured": false, 00:10:15.483 "data_offset": 0, 00:10:15.483 "data_size": 63488 00:10:15.483 }, 00:10:15.483 { 00:10:15.483 "name": "BaseBdev3", 00:10:15.483 "uuid": "390d3d2d-0ac2-4113-9b6a-35d26afb8ec2", 00:10:15.483 "is_configured": true, 00:10:15.483 "data_offset": 2048, 00:10:15.483 "data_size": 63488 00:10:15.483 }, 00:10:15.483 { 00:10:15.483 "name": "BaseBdev4", 00:10:15.483 "uuid": "97949e50-9c7f-4b06-ad5c-d1d9c76a3c96", 00:10:15.483 "is_configured": true, 00:10:15.483 "data_offset": 2048, 00:10:15.483 "data_size": 63488 00:10:15.483 } 00:10:15.483 ] 00:10:15.483 }' 00:10:15.483 04:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.483 04:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.743 04:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.743 04:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.743 04:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.743 04:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:15.743 04:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.743 04:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:15.743 04:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:15.743 04:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.743 04:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.743 [2024-11-21 04:08:15.644691] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:15.744 04:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.744 04:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:15.744 04:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:15.744 04:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:15.744 04:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:15.744 04:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:15.744 04:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:15.744 04:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.744 04:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.744 04:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.744 04:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.744 04:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.744 04:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.744 04:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.744 04:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.744 04:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.744 04:08:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.744 "name": "Existed_Raid", 00:10:15.744 "uuid": "7fff2b82-33e6-4ae8-8741-5c555d43d202", 00:10:15.744 "strip_size_kb": 64, 00:10:15.744 "state": "configuring", 00:10:15.744 "raid_level": "raid0", 00:10:15.744 "superblock": true, 00:10:15.744 "num_base_bdevs": 4, 00:10:15.744 "num_base_bdevs_discovered": 2, 00:10:15.744 "num_base_bdevs_operational": 4, 00:10:15.744 "base_bdevs_list": [ 00:10:15.744 { 00:10:15.744 "name": "BaseBdev1", 00:10:15.744 "uuid": "6058097a-e31a-485a-90c8-bfeb3167d2e7", 00:10:15.744 "is_configured": true, 00:10:15.744 "data_offset": 2048, 00:10:15.744 "data_size": 63488 00:10:15.744 }, 00:10:15.744 { 00:10:15.744 "name": null, 00:10:15.744 "uuid": "f38993d4-852c-4135-8209-dcf5d59f40d3", 00:10:15.744 "is_configured": false, 00:10:15.744 "data_offset": 0, 00:10:15.744 "data_size": 63488 00:10:15.744 }, 00:10:15.744 { 00:10:15.744 "name": null, 00:10:15.744 "uuid": "390d3d2d-0ac2-4113-9b6a-35d26afb8ec2", 00:10:15.744 "is_configured": false, 00:10:15.744 "data_offset": 0, 00:10:15.744 "data_size": 63488 00:10:15.744 }, 00:10:15.744 { 00:10:15.744 "name": "BaseBdev4", 00:10:15.744 "uuid": "97949e50-9c7f-4b06-ad5c-d1d9c76a3c96", 00:10:15.744 "is_configured": true, 00:10:15.744 "data_offset": 2048, 00:10:15.744 "data_size": 63488 00:10:15.744 } 00:10:15.744 ] 00:10:15.744 }' 00:10:15.744 04:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.744 04:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.315 04:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:16.315 04:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.315 04:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.315 
04:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.315 04:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.315 04:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:16.315 04:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:16.315 04:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.315 04:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.315 [2024-11-21 04:08:16.111903] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:16.315 04:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.315 04:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:16.315 04:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:16.315 04:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:16.315 04:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:16.315 04:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:16.315 04:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:16.315 04:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.315 04:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.315 04:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:16.315 04:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.315 04:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:16.315 04:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.315 04:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.315 04:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.315 04:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.315 04:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.315 "name": "Existed_Raid", 00:10:16.315 "uuid": "7fff2b82-33e6-4ae8-8741-5c555d43d202", 00:10:16.315 "strip_size_kb": 64, 00:10:16.315 "state": "configuring", 00:10:16.315 "raid_level": "raid0", 00:10:16.315 "superblock": true, 00:10:16.316 "num_base_bdevs": 4, 00:10:16.316 "num_base_bdevs_discovered": 3, 00:10:16.316 "num_base_bdevs_operational": 4, 00:10:16.316 "base_bdevs_list": [ 00:10:16.316 { 00:10:16.316 "name": "BaseBdev1", 00:10:16.316 "uuid": "6058097a-e31a-485a-90c8-bfeb3167d2e7", 00:10:16.316 "is_configured": true, 00:10:16.316 "data_offset": 2048, 00:10:16.316 "data_size": 63488 00:10:16.316 }, 00:10:16.316 { 00:10:16.316 "name": null, 00:10:16.316 "uuid": "f38993d4-852c-4135-8209-dcf5d59f40d3", 00:10:16.316 "is_configured": false, 00:10:16.316 "data_offset": 0, 00:10:16.316 "data_size": 63488 00:10:16.316 }, 00:10:16.316 { 00:10:16.316 "name": "BaseBdev3", 00:10:16.316 "uuid": "390d3d2d-0ac2-4113-9b6a-35d26afb8ec2", 00:10:16.316 "is_configured": true, 00:10:16.316 "data_offset": 2048, 00:10:16.316 "data_size": 63488 00:10:16.316 }, 00:10:16.316 { 00:10:16.316 "name": "BaseBdev4", 00:10:16.316 "uuid": 
"97949e50-9c7f-4b06-ad5c-d1d9c76a3c96", 00:10:16.316 "is_configured": true, 00:10:16.316 "data_offset": 2048, 00:10:16.316 "data_size": 63488 00:10:16.316 } 00:10:16.316 ] 00:10:16.316 }' 00:10:16.316 04:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.316 04:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.886 04:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.886 04:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.886 04:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.886 04:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:16.886 04:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.886 04:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:16.886 04:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:16.886 04:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.886 04:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.886 [2024-11-21 04:08:16.599145] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:16.886 04:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.886 04:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:16.886 04:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:16.886 04:08:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:16.886 04:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:16.886 04:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:16.886 04:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:16.886 04:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.886 04:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.886 04:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.887 04:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.887 04:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.887 04:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:16.887 04:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.887 04:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.887 04:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.887 04:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.887 "name": "Existed_Raid", 00:10:16.887 "uuid": "7fff2b82-33e6-4ae8-8741-5c555d43d202", 00:10:16.887 "strip_size_kb": 64, 00:10:16.887 "state": "configuring", 00:10:16.887 "raid_level": "raid0", 00:10:16.887 "superblock": true, 00:10:16.887 "num_base_bdevs": 4, 00:10:16.887 "num_base_bdevs_discovered": 2, 00:10:16.887 "num_base_bdevs_operational": 4, 00:10:16.887 "base_bdevs_list": [ 00:10:16.887 { 00:10:16.887 "name": null, 00:10:16.887 
"uuid": "6058097a-e31a-485a-90c8-bfeb3167d2e7", 00:10:16.887 "is_configured": false, 00:10:16.887 "data_offset": 0, 00:10:16.887 "data_size": 63488 00:10:16.887 }, 00:10:16.887 { 00:10:16.887 "name": null, 00:10:16.887 "uuid": "f38993d4-852c-4135-8209-dcf5d59f40d3", 00:10:16.887 "is_configured": false, 00:10:16.887 "data_offset": 0, 00:10:16.887 "data_size": 63488 00:10:16.887 }, 00:10:16.887 { 00:10:16.887 "name": "BaseBdev3", 00:10:16.887 "uuid": "390d3d2d-0ac2-4113-9b6a-35d26afb8ec2", 00:10:16.887 "is_configured": true, 00:10:16.887 "data_offset": 2048, 00:10:16.887 "data_size": 63488 00:10:16.887 }, 00:10:16.887 { 00:10:16.887 "name": "BaseBdev4", 00:10:16.887 "uuid": "97949e50-9c7f-4b06-ad5c-d1d9c76a3c96", 00:10:16.887 "is_configured": true, 00:10:16.887 "data_offset": 2048, 00:10:16.887 "data_size": 63488 00:10:16.887 } 00:10:16.887 ] 00:10:16.887 }' 00:10:16.887 04:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.887 04:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.147 04:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:17.147 04:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.147 04:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.147 04:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.147 04:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.147 04:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:17.147 04:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:17.147 04:08:17 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.147 04:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.147 [2024-11-21 04:08:17.063435] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:17.147 04:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.147 04:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:17.147 04:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:17.147 04:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:17.147 04:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:17.147 04:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:17.147 04:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:17.147 04:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.147 04:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.147 04:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.147 04:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.147 04:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.147 04:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.147 04:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:17.147 04:08:17 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.147 04:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.408 04:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.408 "name": "Existed_Raid", 00:10:17.408 "uuid": "7fff2b82-33e6-4ae8-8741-5c555d43d202", 00:10:17.408 "strip_size_kb": 64, 00:10:17.408 "state": "configuring", 00:10:17.408 "raid_level": "raid0", 00:10:17.408 "superblock": true, 00:10:17.408 "num_base_bdevs": 4, 00:10:17.408 "num_base_bdevs_discovered": 3, 00:10:17.408 "num_base_bdevs_operational": 4, 00:10:17.408 "base_bdevs_list": [ 00:10:17.408 { 00:10:17.408 "name": null, 00:10:17.408 "uuid": "6058097a-e31a-485a-90c8-bfeb3167d2e7", 00:10:17.408 "is_configured": false, 00:10:17.408 "data_offset": 0, 00:10:17.408 "data_size": 63488 00:10:17.408 }, 00:10:17.408 { 00:10:17.408 "name": "BaseBdev2", 00:10:17.408 "uuid": "f38993d4-852c-4135-8209-dcf5d59f40d3", 00:10:17.408 "is_configured": true, 00:10:17.408 "data_offset": 2048, 00:10:17.408 "data_size": 63488 00:10:17.408 }, 00:10:17.408 { 00:10:17.408 "name": "BaseBdev3", 00:10:17.408 "uuid": "390d3d2d-0ac2-4113-9b6a-35d26afb8ec2", 00:10:17.408 "is_configured": true, 00:10:17.408 "data_offset": 2048, 00:10:17.408 "data_size": 63488 00:10:17.408 }, 00:10:17.408 { 00:10:17.408 "name": "BaseBdev4", 00:10:17.408 "uuid": "97949e50-9c7f-4b06-ad5c-d1d9c76a3c96", 00:10:17.408 "is_configured": true, 00:10:17.408 "data_offset": 2048, 00:10:17.408 "data_size": 63488 00:10:17.408 } 00:10:17.408 ] 00:10:17.408 }' 00:10:17.408 04:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.408 04:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.671 04:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.671 04:08:17 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.671 04:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.671 04:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:17.671 04:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.671 04:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:17.671 04:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.671 04:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.671 04:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.671 04:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:17.671 04:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.671 04:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 6058097a-e31a-485a-90c8-bfeb3167d2e7 00:10:17.671 04:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.671 04:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.671 [2024-11-21 04:08:17.592400] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:17.671 [2024-11-21 04:08:17.592630] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:10:17.671 [2024-11-21 04:08:17.592646] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:17.671 [2024-11-21 04:08:17.592993] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000002a10 00:10:17.671 [2024-11-21 04:08:17.593129] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:10:17.671 [2024-11-21 04:08:17.593147] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:10:17.671 [2024-11-21 04:08:17.593297] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:17.671 NewBaseBdev 00:10:17.671 04:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.671 04:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:17.671 04:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:17.671 04:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:17.671 04:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:17.671 04:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:17.671 04:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:17.671 04:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:17.671 04:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.671 04:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.671 04:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.671 04:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:17.671 04:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.671 04:08:17 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.671 [ 00:10:17.671 { 00:10:17.671 "name": "NewBaseBdev", 00:10:17.671 "aliases": [ 00:10:17.671 "6058097a-e31a-485a-90c8-bfeb3167d2e7" 00:10:17.671 ], 00:10:17.671 "product_name": "Malloc disk", 00:10:17.671 "block_size": 512, 00:10:17.671 "num_blocks": 65536, 00:10:17.671 "uuid": "6058097a-e31a-485a-90c8-bfeb3167d2e7", 00:10:17.671 "assigned_rate_limits": { 00:10:17.671 "rw_ios_per_sec": 0, 00:10:17.671 "rw_mbytes_per_sec": 0, 00:10:17.671 "r_mbytes_per_sec": 0, 00:10:17.671 "w_mbytes_per_sec": 0 00:10:17.671 }, 00:10:17.671 "claimed": true, 00:10:17.671 "claim_type": "exclusive_write", 00:10:17.671 "zoned": false, 00:10:17.671 "supported_io_types": { 00:10:17.671 "read": true, 00:10:17.671 "write": true, 00:10:17.671 "unmap": true, 00:10:17.671 "flush": true, 00:10:17.671 "reset": true, 00:10:17.671 "nvme_admin": false, 00:10:17.671 "nvme_io": false, 00:10:17.671 "nvme_io_md": false, 00:10:17.671 "write_zeroes": true, 00:10:17.671 "zcopy": true, 00:10:17.671 "get_zone_info": false, 00:10:17.671 "zone_management": false, 00:10:17.671 "zone_append": false, 00:10:17.671 "compare": false, 00:10:17.671 "compare_and_write": false, 00:10:17.671 "abort": true, 00:10:17.671 "seek_hole": false, 00:10:17.671 "seek_data": false, 00:10:17.671 "copy": true, 00:10:17.671 "nvme_iov_md": false 00:10:17.671 }, 00:10:17.671 "memory_domains": [ 00:10:17.671 { 00:10:17.671 "dma_device_id": "system", 00:10:17.671 "dma_device_type": 1 00:10:17.671 }, 00:10:17.671 { 00:10:17.671 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.671 "dma_device_type": 2 00:10:17.671 } 00:10:17.671 ], 00:10:17.671 "driver_specific": {} 00:10:17.671 } 00:10:17.671 ] 00:10:17.671 04:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.671 04:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:17.671 04:08:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:17.671 04:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:17.671 04:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:17.671 04:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:17.671 04:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:17.671 04:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:17.671 04:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.671 04:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.671 04:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.671 04:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.671 04:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:17.671 04:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.671 04:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.931 04:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.931 04:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.931 04:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.931 "name": "Existed_Raid", 00:10:17.931 "uuid": "7fff2b82-33e6-4ae8-8741-5c555d43d202", 00:10:17.931 "strip_size_kb": 64, 00:10:17.931 
"state": "online", 00:10:17.931 "raid_level": "raid0", 00:10:17.931 "superblock": true, 00:10:17.931 "num_base_bdevs": 4, 00:10:17.931 "num_base_bdevs_discovered": 4, 00:10:17.931 "num_base_bdevs_operational": 4, 00:10:17.931 "base_bdevs_list": [ 00:10:17.931 { 00:10:17.931 "name": "NewBaseBdev", 00:10:17.931 "uuid": "6058097a-e31a-485a-90c8-bfeb3167d2e7", 00:10:17.931 "is_configured": true, 00:10:17.931 "data_offset": 2048, 00:10:17.931 "data_size": 63488 00:10:17.931 }, 00:10:17.931 { 00:10:17.931 "name": "BaseBdev2", 00:10:17.931 "uuid": "f38993d4-852c-4135-8209-dcf5d59f40d3", 00:10:17.931 "is_configured": true, 00:10:17.931 "data_offset": 2048, 00:10:17.931 "data_size": 63488 00:10:17.931 }, 00:10:17.931 { 00:10:17.931 "name": "BaseBdev3", 00:10:17.931 "uuid": "390d3d2d-0ac2-4113-9b6a-35d26afb8ec2", 00:10:17.931 "is_configured": true, 00:10:17.931 "data_offset": 2048, 00:10:17.931 "data_size": 63488 00:10:17.931 }, 00:10:17.931 { 00:10:17.931 "name": "BaseBdev4", 00:10:17.931 "uuid": "97949e50-9c7f-4b06-ad5c-d1d9c76a3c96", 00:10:17.931 "is_configured": true, 00:10:17.931 "data_offset": 2048, 00:10:17.931 "data_size": 63488 00:10:17.931 } 00:10:17.931 ] 00:10:17.931 }' 00:10:17.932 04:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.932 04:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.192 04:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:18.192 04:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:18.192 04:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:18.192 04:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:18.192 04:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:18.192 
04:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:18.192 04:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:18.192 04:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:18.192 04:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.192 04:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.192 [2024-11-21 04:08:18.040109] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:18.192 04:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.192 04:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:18.192 "name": "Existed_Raid", 00:10:18.192 "aliases": [ 00:10:18.192 "7fff2b82-33e6-4ae8-8741-5c555d43d202" 00:10:18.192 ], 00:10:18.192 "product_name": "Raid Volume", 00:10:18.192 "block_size": 512, 00:10:18.192 "num_blocks": 253952, 00:10:18.192 "uuid": "7fff2b82-33e6-4ae8-8741-5c555d43d202", 00:10:18.192 "assigned_rate_limits": { 00:10:18.192 "rw_ios_per_sec": 0, 00:10:18.192 "rw_mbytes_per_sec": 0, 00:10:18.192 "r_mbytes_per_sec": 0, 00:10:18.192 "w_mbytes_per_sec": 0 00:10:18.192 }, 00:10:18.192 "claimed": false, 00:10:18.192 "zoned": false, 00:10:18.192 "supported_io_types": { 00:10:18.192 "read": true, 00:10:18.192 "write": true, 00:10:18.192 "unmap": true, 00:10:18.192 "flush": true, 00:10:18.192 "reset": true, 00:10:18.192 "nvme_admin": false, 00:10:18.192 "nvme_io": false, 00:10:18.192 "nvme_io_md": false, 00:10:18.192 "write_zeroes": true, 00:10:18.192 "zcopy": false, 00:10:18.192 "get_zone_info": false, 00:10:18.192 "zone_management": false, 00:10:18.192 "zone_append": false, 00:10:18.192 "compare": false, 00:10:18.192 "compare_and_write": false, 00:10:18.192 "abort": 
false, 00:10:18.192 "seek_hole": false, 00:10:18.192 "seek_data": false, 00:10:18.192 "copy": false, 00:10:18.192 "nvme_iov_md": false 00:10:18.192 }, 00:10:18.192 "memory_domains": [ 00:10:18.192 { 00:10:18.192 "dma_device_id": "system", 00:10:18.192 "dma_device_type": 1 00:10:18.192 }, 00:10:18.192 { 00:10:18.192 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.192 "dma_device_type": 2 00:10:18.192 }, 00:10:18.192 { 00:10:18.192 "dma_device_id": "system", 00:10:18.192 "dma_device_type": 1 00:10:18.192 }, 00:10:18.192 { 00:10:18.192 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.192 "dma_device_type": 2 00:10:18.192 }, 00:10:18.192 { 00:10:18.192 "dma_device_id": "system", 00:10:18.192 "dma_device_type": 1 00:10:18.192 }, 00:10:18.192 { 00:10:18.192 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.192 "dma_device_type": 2 00:10:18.192 }, 00:10:18.192 { 00:10:18.192 "dma_device_id": "system", 00:10:18.192 "dma_device_type": 1 00:10:18.192 }, 00:10:18.192 { 00:10:18.192 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.192 "dma_device_type": 2 00:10:18.192 } 00:10:18.192 ], 00:10:18.192 "driver_specific": { 00:10:18.192 "raid": { 00:10:18.192 "uuid": "7fff2b82-33e6-4ae8-8741-5c555d43d202", 00:10:18.192 "strip_size_kb": 64, 00:10:18.192 "state": "online", 00:10:18.192 "raid_level": "raid0", 00:10:18.192 "superblock": true, 00:10:18.192 "num_base_bdevs": 4, 00:10:18.192 "num_base_bdevs_discovered": 4, 00:10:18.192 "num_base_bdevs_operational": 4, 00:10:18.192 "base_bdevs_list": [ 00:10:18.192 { 00:10:18.192 "name": "NewBaseBdev", 00:10:18.192 "uuid": "6058097a-e31a-485a-90c8-bfeb3167d2e7", 00:10:18.192 "is_configured": true, 00:10:18.192 "data_offset": 2048, 00:10:18.192 "data_size": 63488 00:10:18.192 }, 00:10:18.192 { 00:10:18.192 "name": "BaseBdev2", 00:10:18.192 "uuid": "f38993d4-852c-4135-8209-dcf5d59f40d3", 00:10:18.192 "is_configured": true, 00:10:18.192 "data_offset": 2048, 00:10:18.192 "data_size": 63488 00:10:18.192 }, 00:10:18.192 { 00:10:18.192 
"name": "BaseBdev3", 00:10:18.192 "uuid": "390d3d2d-0ac2-4113-9b6a-35d26afb8ec2", 00:10:18.192 "is_configured": true, 00:10:18.192 "data_offset": 2048, 00:10:18.192 "data_size": 63488 00:10:18.192 }, 00:10:18.192 { 00:10:18.192 "name": "BaseBdev4", 00:10:18.192 "uuid": "97949e50-9c7f-4b06-ad5c-d1d9c76a3c96", 00:10:18.192 "is_configured": true, 00:10:18.192 "data_offset": 2048, 00:10:18.192 "data_size": 63488 00:10:18.192 } 00:10:18.192 ] 00:10:18.192 } 00:10:18.192 } 00:10:18.192 }' 00:10:18.192 04:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:18.192 04:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:18.192 BaseBdev2 00:10:18.192 BaseBdev3 00:10:18.192 BaseBdev4' 00:10:18.192 04:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:18.192 04:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:18.192 04:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:18.192 04:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:18.192 04:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:18.192 04:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.192 04:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.454 04:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.454 04:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:18.454 04:08:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:18.454 04:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:18.454 04:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:18.454 04:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.454 04:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.454 04:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:18.454 04:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.454 04:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:18.454 04:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:18.454 04:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:18.454 04:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:18.454 04:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:18.454 04:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.454 04:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.454 04:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.454 04:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:18.454 04:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:10:18.454 04:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:18.454 04:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:18.454 04:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:18.454 04:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.454 04:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.454 04:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.454 04:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:18.454 04:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:18.454 04:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:18.454 04:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.454 04:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.454 [2024-11-21 04:08:18.343235] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:18.454 [2024-11-21 04:08:18.343276] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:18.454 [2024-11-21 04:08:18.343405] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:18.454 [2024-11-21 04:08:18.343536] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:18.454 [2024-11-21 04:08:18.343556] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, 
state offline 00:10:18.454 04:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.454 04:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 81030 00:10:18.454 04:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 81030 ']' 00:10:18.454 04:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 81030 00:10:18.454 04:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:18.454 04:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:18.454 04:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81030 00:10:18.454 killing process with pid 81030 00:10:18.454 04:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:18.454 04:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:18.454 04:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81030' 00:10:18.454 04:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 81030 00:10:18.454 [2024-11-21 04:08:18.392005] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:18.454 04:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 81030 00:10:18.715 [2024-11-21 04:08:18.476622] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:18.974 ************************************ 00:10:18.974 END TEST raid_state_function_test_sb 00:10:18.974 ************************************ 00:10:18.974 04:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:18.974 00:10:18.974 real 0m9.395s 00:10:18.974 user 0m15.619s 00:10:18.974 sys 
0m2.038s 00:10:18.974 04:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:18.974 04:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.974 04:08:18 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:10:18.974 04:08:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:18.974 04:08:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:18.974 04:08:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:18.974 ************************************ 00:10:18.974 START TEST raid_superblock_test 00:10:18.974 ************************************ 00:10:18.974 04:08:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 00:10:18.974 04:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:10:18.974 04:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:10:18.974 04:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:18.974 04:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:18.974 04:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:18.974 04:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:18.974 04:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:18.974 04:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:18.975 04:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:18.975 04:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:18.975 04:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # 
local strip_size_create_arg 00:10:18.975 04:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:18.975 04:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:18.975 04:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:10:18.975 04:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:18.975 04:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:18.975 04:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81678 00:10:18.975 04:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:18.975 04:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81678 00:10:18.975 04:08:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 81678 ']' 00:10:18.975 04:08:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:18.975 04:08:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:18.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:18.975 04:08:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:18.975 04:08:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:18.975 04:08:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.233 [2024-11-21 04:08:18.955599] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:10:19.233 [2024-11-21 04:08:18.955779] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81678 ] 00:10:19.233 [2024-11-21 04:08:19.109970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:19.233 [2024-11-21 04:08:19.153279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:19.492 [2024-11-21 04:08:19.234380] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:19.492 [2024-11-21 04:08:19.234426] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:20.060 04:08:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:20.060 04:08:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:20.060 04:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:20.060 04:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:20.060 04:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:20.060 04:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:20.060 04:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:20.060 04:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:20.060 04:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:20.060 04:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:20.060 04:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:20.060 
04:08:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.060 04:08:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.060 malloc1 00:10:20.060 04:08:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.060 04:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:20.060 04:08:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.060 04:08:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.060 [2024-11-21 04:08:19.844471] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:20.060 [2024-11-21 04:08:19.844541] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.060 [2024-11-21 04:08:19.844571] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:10:20.060 [2024-11-21 04:08:19.844588] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.060 [2024-11-21 04:08:19.847178] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.060 [2024-11-21 04:08:19.847231] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:20.060 pt1 00:10:20.060 04:08:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.060 04:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:20.060 04:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:20.060 04:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:20.060 04:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:20.060 04:08:19 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:20.060 04:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:20.060 04:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:20.060 04:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:20.060 04:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:20.060 04:08:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.060 04:08:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.060 malloc2 00:10:20.060 04:08:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.060 04:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:20.060 04:08:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.060 04:08:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.060 [2024-11-21 04:08:19.880091] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:20.060 [2024-11-21 04:08:19.880145] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.060 [2024-11-21 04:08:19.880178] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:20.060 [2024-11-21 04:08:19.880190] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.060 [2024-11-21 04:08:19.882717] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.060 [2024-11-21 04:08:19.882754] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:20.060 
pt2 00:10:20.060 04:08:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.060 04:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:20.060 04:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:20.060 04:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:20.060 04:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:20.060 04:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:20.060 04:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:20.060 04:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:20.060 04:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:20.060 04:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:20.060 04:08:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.060 04:08:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.060 malloc3 00:10:20.060 04:08:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.060 04:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:20.060 04:08:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.060 04:08:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.060 [2024-11-21 04:08:19.915692] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:20.060 [2024-11-21 04:08:19.915776] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.060 [2024-11-21 04:08:19.915800] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:20.060 [2024-11-21 04:08:19.915812] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.060 [2024-11-21 04:08:19.918347] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.060 [2024-11-21 04:08:19.918386] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:20.060 pt3 00:10:20.060 04:08:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.060 04:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:20.060 04:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:20.060 04:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:10:20.060 04:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:10:20.060 04:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:10:20.060 04:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:20.060 04:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:20.060 04:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:20.060 04:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:10:20.060 04:08:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.060 04:08:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.060 malloc4 00:10:20.060 04:08:19 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.060 04:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:20.060 04:08:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.060 04:08:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.060 [2024-11-21 04:08:19.959693] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:20.060 [2024-11-21 04:08:19.959754] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.061 [2024-11-21 04:08:19.959787] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:20.061 [2024-11-21 04:08:19.959802] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.061 [2024-11-21 04:08:19.962332] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.061 [2024-11-21 04:08:19.962370] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:20.061 pt4 00:10:20.061 04:08:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.061 04:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:20.061 04:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:20.061 04:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:10:20.061 04:08:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.061 04:08:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.061 [2024-11-21 04:08:19.971695] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:20.061 [2024-11-21 
04:08:19.973948] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:20.061 [2024-11-21 04:08:19.974018] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:20.061 [2024-11-21 04:08:19.974065] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:20.061 [2024-11-21 04:08:19.974260] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:10:20.061 [2024-11-21 04:08:19.974278] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:20.061 [2024-11-21 04:08:19.974572] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:10:20.061 [2024-11-21 04:08:19.974760] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:10:20.061 [2024-11-21 04:08:19.974779] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:10:20.061 [2024-11-21 04:08:19.974935] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:20.061 04:08:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.061 04:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:20.061 04:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:20.061 04:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:20.061 04:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:20.061 04:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:20.061 04:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:20.061 04:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:20.061 04:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.061 04:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.061 04:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.061 04:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.061 04:08:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.061 04:08:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.061 04:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:20.061 04:08:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.061 04:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.061 "name": "raid_bdev1", 00:10:20.061 "uuid": "a34aea81-000e-4e88-9130-579428148110", 00:10:20.061 "strip_size_kb": 64, 00:10:20.061 "state": "online", 00:10:20.061 "raid_level": "raid0", 00:10:20.061 "superblock": true, 00:10:20.061 "num_base_bdevs": 4, 00:10:20.061 "num_base_bdevs_discovered": 4, 00:10:20.061 "num_base_bdevs_operational": 4, 00:10:20.061 "base_bdevs_list": [ 00:10:20.061 { 00:10:20.061 "name": "pt1", 00:10:20.061 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:20.061 "is_configured": true, 00:10:20.061 "data_offset": 2048, 00:10:20.061 "data_size": 63488 00:10:20.061 }, 00:10:20.061 { 00:10:20.061 "name": "pt2", 00:10:20.061 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:20.061 "is_configured": true, 00:10:20.061 "data_offset": 2048, 00:10:20.061 "data_size": 63488 00:10:20.061 }, 00:10:20.061 { 00:10:20.061 "name": "pt3", 00:10:20.061 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:20.061 "is_configured": true, 00:10:20.061 "data_offset": 2048, 00:10:20.061 
"data_size": 63488 00:10:20.061 }, 00:10:20.061 { 00:10:20.061 "name": "pt4", 00:10:20.061 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:20.061 "is_configured": true, 00:10:20.061 "data_offset": 2048, 00:10:20.061 "data_size": 63488 00:10:20.061 } 00:10:20.061 ] 00:10:20.061 }' 00:10:20.061 04:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.061 04:08:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.629 04:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:20.629 04:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:20.629 04:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:20.629 04:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:20.629 04:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:20.629 04:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:20.629 04:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:20.629 04:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:20.629 04:08:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.629 04:08:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.629 [2024-11-21 04:08:20.435372] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:20.629 04:08:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.629 04:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:20.629 "name": "raid_bdev1", 00:10:20.629 "aliases": [ 00:10:20.629 "a34aea81-000e-4e88-9130-579428148110" 
00:10:20.629 ], 00:10:20.629 "product_name": "Raid Volume", 00:10:20.629 "block_size": 512, 00:10:20.629 "num_blocks": 253952, 00:10:20.629 "uuid": "a34aea81-000e-4e88-9130-579428148110", 00:10:20.629 "assigned_rate_limits": { 00:10:20.629 "rw_ios_per_sec": 0, 00:10:20.629 "rw_mbytes_per_sec": 0, 00:10:20.629 "r_mbytes_per_sec": 0, 00:10:20.629 "w_mbytes_per_sec": 0 00:10:20.629 }, 00:10:20.629 "claimed": false, 00:10:20.629 "zoned": false, 00:10:20.629 "supported_io_types": { 00:10:20.629 "read": true, 00:10:20.629 "write": true, 00:10:20.629 "unmap": true, 00:10:20.629 "flush": true, 00:10:20.629 "reset": true, 00:10:20.629 "nvme_admin": false, 00:10:20.629 "nvme_io": false, 00:10:20.629 "nvme_io_md": false, 00:10:20.629 "write_zeroes": true, 00:10:20.629 "zcopy": false, 00:10:20.629 "get_zone_info": false, 00:10:20.629 "zone_management": false, 00:10:20.629 "zone_append": false, 00:10:20.629 "compare": false, 00:10:20.629 "compare_and_write": false, 00:10:20.629 "abort": false, 00:10:20.629 "seek_hole": false, 00:10:20.629 "seek_data": false, 00:10:20.629 "copy": false, 00:10:20.629 "nvme_iov_md": false 00:10:20.629 }, 00:10:20.629 "memory_domains": [ 00:10:20.629 { 00:10:20.629 "dma_device_id": "system", 00:10:20.629 "dma_device_type": 1 00:10:20.629 }, 00:10:20.629 { 00:10:20.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.629 "dma_device_type": 2 00:10:20.629 }, 00:10:20.629 { 00:10:20.629 "dma_device_id": "system", 00:10:20.629 "dma_device_type": 1 00:10:20.629 }, 00:10:20.629 { 00:10:20.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.629 "dma_device_type": 2 00:10:20.629 }, 00:10:20.629 { 00:10:20.629 "dma_device_id": "system", 00:10:20.629 "dma_device_type": 1 00:10:20.629 }, 00:10:20.629 { 00:10:20.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.629 "dma_device_type": 2 00:10:20.630 }, 00:10:20.630 { 00:10:20.630 "dma_device_id": "system", 00:10:20.630 "dma_device_type": 1 00:10:20.630 }, 00:10:20.630 { 00:10:20.630 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:20.630 "dma_device_type": 2 00:10:20.630 } 00:10:20.630 ], 00:10:20.630 "driver_specific": { 00:10:20.630 "raid": { 00:10:20.630 "uuid": "a34aea81-000e-4e88-9130-579428148110", 00:10:20.630 "strip_size_kb": 64, 00:10:20.630 "state": "online", 00:10:20.630 "raid_level": "raid0", 00:10:20.630 "superblock": true, 00:10:20.630 "num_base_bdevs": 4, 00:10:20.630 "num_base_bdevs_discovered": 4, 00:10:20.630 "num_base_bdevs_operational": 4, 00:10:20.630 "base_bdevs_list": [ 00:10:20.630 { 00:10:20.630 "name": "pt1", 00:10:20.630 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:20.630 "is_configured": true, 00:10:20.630 "data_offset": 2048, 00:10:20.630 "data_size": 63488 00:10:20.630 }, 00:10:20.630 { 00:10:20.630 "name": "pt2", 00:10:20.630 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:20.630 "is_configured": true, 00:10:20.630 "data_offset": 2048, 00:10:20.630 "data_size": 63488 00:10:20.630 }, 00:10:20.630 { 00:10:20.630 "name": "pt3", 00:10:20.630 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:20.630 "is_configured": true, 00:10:20.630 "data_offset": 2048, 00:10:20.630 "data_size": 63488 00:10:20.630 }, 00:10:20.630 { 00:10:20.630 "name": "pt4", 00:10:20.630 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:20.630 "is_configured": true, 00:10:20.630 "data_offset": 2048, 00:10:20.630 "data_size": 63488 00:10:20.630 } 00:10:20.630 ] 00:10:20.630 } 00:10:20.630 } 00:10:20.630 }' 00:10:20.630 04:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:20.630 04:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:20.630 pt2 00:10:20.630 pt3 00:10:20.630 pt4' 00:10:20.630 04:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.630 04:08:20 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:20.630 04:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:20.630 04:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.630 04:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:20.630 04:08:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.630 04:08:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.630 04:08:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.630 04:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:20.630 04:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:20.630 04:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:20.630 04:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:20.630 04:08:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.630 04:08:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.630 04:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.890 04:08:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.890 04:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:20.890 04:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:20.890 04:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:20.890 04:08:20 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:20.890 04:08:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.890 04:08:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.890 04:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.890 04:08:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.890 04:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:20.890 04:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:20.890 04:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:20.890 04:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.890 04:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:20.890 04:08:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.890 04:08:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.890 04:08:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.890 04:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:20.890 04:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:20.890 04:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:20.890 04:08:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.890 04:08:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:10:20.890 04:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:20.890 [2024-11-21 04:08:20.730735] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:20.890 04:08:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.890 04:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a34aea81-000e-4e88-9130-579428148110 00:10:20.890 04:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z a34aea81-000e-4e88-9130-579428148110 ']' 00:10:20.890 04:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:20.890 04:08:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.890 04:08:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.890 [2024-11-21 04:08:20.778377] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:20.890 [2024-11-21 04:08:20.778419] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:20.890 [2024-11-21 04:08:20.778534] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:20.890 [2024-11-21 04:08:20.778701] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:20.890 [2024-11-21 04:08:20.778720] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:10:20.890 04:08:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.890 04:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.890 04:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:20.890 04:08:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:20.890 04:08:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.890 04:08:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.890 04:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:20.890 04:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:20.890 04:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:20.890 04:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:20.890 04:08:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.890 04:08:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.890 04:08:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.890 04:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:20.890 04:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:20.890 04:08:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.890 04:08:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.890 04:08:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.890 04:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:20.890 04:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:20.890 04:08:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.890 04:08:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.151 04:08:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:10:21.151 04:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:21.151 04:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:10:21.151 04:08:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.151 04:08:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.151 04:08:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.151 04:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:21.151 04:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:21.151 04:08:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.151 04:08:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.151 04:08:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.151 04:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:21.151 04:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:21.151 04:08:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:21.151 04:08:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:21.151 04:08:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:21.151 04:08:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:21.151 04:08:20 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:21.151 04:08:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:21.151 04:08:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:21.151 04:08:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.151 04:08:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.151 [2024-11-21 04:08:20.938175] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:21.151 [2024-11-21 04:08:20.940449] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:21.151 [2024-11-21 04:08:20.940503] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:21.151 [2024-11-21 04:08:20.940536] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:10:21.151 [2024-11-21 04:08:20.940594] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:21.151 [2024-11-21 04:08:20.940648] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:21.151 [2024-11-21 04:08:20.940669] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:21.151 [2024-11-21 04:08:20.940685] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:10:21.151 [2024-11-21 04:08:20.940700] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:21.151 [2024-11-21 04:08:20.940711] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state 
configuring 00:10:21.151 request: 00:10:21.151 { 00:10:21.151 "name": "raid_bdev1", 00:10:21.151 "raid_level": "raid0", 00:10:21.151 "base_bdevs": [ 00:10:21.151 "malloc1", 00:10:21.151 "malloc2", 00:10:21.151 "malloc3", 00:10:21.151 "malloc4" 00:10:21.151 ], 00:10:21.151 "strip_size_kb": 64, 00:10:21.151 "superblock": false, 00:10:21.151 "method": "bdev_raid_create", 00:10:21.151 "req_id": 1 00:10:21.151 } 00:10:21.151 Got JSON-RPC error response 00:10:21.151 response: 00:10:21.151 { 00:10:21.151 "code": -17, 00:10:21.151 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:21.151 } 00:10:21.151 04:08:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:21.151 04:08:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:21.151 04:08:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:21.151 04:08:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:21.151 04:08:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:21.151 04:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:21.151 04:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.151 04:08:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.151 04:08:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.151 04:08:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.151 04:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:21.151 04:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:21.151 04:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:10:21.151 04:08:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.151 04:08:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.151 [2024-11-21 04:08:20.990007] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:21.151 [2024-11-21 04:08:20.990069] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:21.151 [2024-11-21 04:08:20.990094] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:21.151 [2024-11-21 04:08:20.990103] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:21.151 [2024-11-21 04:08:20.992695] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:21.151 [2024-11-21 04:08:20.992747] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:21.151 [2024-11-21 04:08:20.992837] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:21.151 [2024-11-21 04:08:20.992903] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:21.151 pt1 00:10:21.151 04:08:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.151 04:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:10:21.151 04:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:21.151 04:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:21.151 04:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:21.151 04:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:21.151 04:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:21.151 04:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.151 04:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.151 04:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.151 04:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.151 04:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.151 04:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:21.151 04:08:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.151 04:08:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.151 04:08:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.151 04:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.151 "name": "raid_bdev1", 00:10:21.151 "uuid": "a34aea81-000e-4e88-9130-579428148110", 00:10:21.151 "strip_size_kb": 64, 00:10:21.151 "state": "configuring", 00:10:21.151 "raid_level": "raid0", 00:10:21.151 "superblock": true, 00:10:21.151 "num_base_bdevs": 4, 00:10:21.151 "num_base_bdevs_discovered": 1, 00:10:21.151 "num_base_bdevs_operational": 4, 00:10:21.151 "base_bdevs_list": [ 00:10:21.151 { 00:10:21.151 "name": "pt1", 00:10:21.151 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:21.151 "is_configured": true, 00:10:21.151 "data_offset": 2048, 00:10:21.151 "data_size": 63488 00:10:21.151 }, 00:10:21.151 { 00:10:21.151 "name": null, 00:10:21.151 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:21.151 "is_configured": false, 00:10:21.151 "data_offset": 2048, 00:10:21.151 "data_size": 63488 00:10:21.151 }, 00:10:21.151 { 00:10:21.151 "name": null, 00:10:21.151 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:10:21.151 "is_configured": false, 00:10:21.151 "data_offset": 2048, 00:10:21.151 "data_size": 63488 00:10:21.151 }, 00:10:21.151 { 00:10:21.151 "name": null, 00:10:21.151 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:21.151 "is_configured": false, 00:10:21.151 "data_offset": 2048, 00:10:21.151 "data_size": 63488 00:10:21.151 } 00:10:21.151 ] 00:10:21.152 }' 00:10:21.152 04:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.152 04:08:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.722 04:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:10:21.722 04:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:21.722 04:08:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.722 04:08:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.722 [2024-11-21 04:08:21.445271] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:21.722 [2024-11-21 04:08:21.445344] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:21.722 [2024-11-21 04:08:21.445373] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:10:21.722 [2024-11-21 04:08:21.445387] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:21.722 [2024-11-21 04:08:21.445928] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:21.722 [2024-11-21 04:08:21.445956] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:21.722 [2024-11-21 04:08:21.446063] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:21.722 [2024-11-21 04:08:21.446097] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:21.722 pt2 00:10:21.722 04:08:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.722 04:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:21.722 04:08:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.722 04:08:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.722 [2024-11-21 04:08:21.453261] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:21.722 04:08:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.722 04:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:10:21.722 04:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:21.722 04:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:21.722 04:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:21.722 04:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:21.722 04:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:21.722 04:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.722 04:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.722 04:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.722 04:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.722 04:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.722 04:08:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:21.722 04:08:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.722 04:08:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.722 04:08:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.722 04:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.722 "name": "raid_bdev1", 00:10:21.722 "uuid": "a34aea81-000e-4e88-9130-579428148110", 00:10:21.722 "strip_size_kb": 64, 00:10:21.722 "state": "configuring", 00:10:21.722 "raid_level": "raid0", 00:10:21.722 "superblock": true, 00:10:21.722 "num_base_bdevs": 4, 00:10:21.722 "num_base_bdevs_discovered": 1, 00:10:21.722 "num_base_bdevs_operational": 4, 00:10:21.722 "base_bdevs_list": [ 00:10:21.722 { 00:10:21.722 "name": "pt1", 00:10:21.722 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:21.722 "is_configured": true, 00:10:21.722 "data_offset": 2048, 00:10:21.722 "data_size": 63488 00:10:21.722 }, 00:10:21.722 { 00:10:21.722 "name": null, 00:10:21.722 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:21.722 "is_configured": false, 00:10:21.722 "data_offset": 0, 00:10:21.722 "data_size": 63488 00:10:21.722 }, 00:10:21.722 { 00:10:21.722 "name": null, 00:10:21.722 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:21.722 "is_configured": false, 00:10:21.722 "data_offset": 2048, 00:10:21.722 "data_size": 63488 00:10:21.722 }, 00:10:21.722 { 00:10:21.722 "name": null, 00:10:21.722 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:21.722 "is_configured": false, 00:10:21.722 "data_offset": 2048, 00:10:21.722 "data_size": 63488 00:10:21.722 } 00:10:21.722 ] 00:10:21.722 }' 00:10:21.722 04:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.722 04:08:21 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:21.984 04:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:21.984 04:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:21.984 04:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:21.984 04:08:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.984 04:08:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.984 [2024-11-21 04:08:21.924480] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:21.984 [2024-11-21 04:08:21.924574] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:21.984 [2024-11-21 04:08:21.924599] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:10:21.984 [2024-11-21 04:08:21.924612] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:21.984 [2024-11-21 04:08:21.925121] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:21.984 [2024-11-21 04:08:21.925153] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:21.984 [2024-11-21 04:08:21.925254] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:21.984 [2024-11-21 04:08:21.925286] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:21.984 pt2 00:10:21.984 04:08:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.984 04:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:21.984 04:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:21.984 04:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:21.984 04:08:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.984 04:08:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.984 [2024-11-21 04:08:21.936372] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:21.984 [2024-11-21 04:08:21.936447] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:21.984 [2024-11-21 04:08:21.936467] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:21.984 [2024-11-21 04:08:21.936478] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:21.984 [2024-11-21 04:08:21.936923] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:21.984 [2024-11-21 04:08:21.936953] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:21.984 [2024-11-21 04:08:21.937020] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:21.984 [2024-11-21 04:08:21.937046] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:21.984 pt3 00:10:21.984 04:08:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.984 04:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:21.984 04:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:21.984 04:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:21.984 04:08:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.984 04:08:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.984 [2024-11-21 04:08:21.948349] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:21.984 [2024-11-21 04:08:21.948407] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:21.984 [2024-11-21 04:08:21.948422] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:10:21.984 [2024-11-21 04:08:21.948433] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:21.984 [2024-11-21 04:08:21.948779] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:21.984 [2024-11-21 04:08:21.948806] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:21.984 [2024-11-21 04:08:21.948861] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:21.984 [2024-11-21 04:08:21.948882] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:21.984 [2024-11-21 04:08:21.948997] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:10:21.984 [2024-11-21 04:08:21.949016] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:21.984 [2024-11-21 04:08:21.949331] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:10:21.984 [2024-11-21 04:08:21.949481] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:10:21.984 [2024-11-21 04:08:21.949505] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:10:21.984 [2024-11-21 04:08:21.949612] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:21.984 pt4 00:10:21.984 04:08:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.984 04:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:21.984 04:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:10:21.984 04:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:21.984 04:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:21.984 04:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:21.984 04:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:21.984 04:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:22.245 04:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:22.245 04:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:22.245 04:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.245 04:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:22.245 04:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.245 04:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.245 04:08:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.245 04:08:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.245 04:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:22.245 04:08:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.245 04:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:22.245 "name": "raid_bdev1", 00:10:22.245 "uuid": "a34aea81-000e-4e88-9130-579428148110", 00:10:22.245 "strip_size_kb": 64, 00:10:22.245 "state": "online", 00:10:22.245 "raid_level": "raid0", 00:10:22.245 
"superblock": true, 00:10:22.245 "num_base_bdevs": 4, 00:10:22.245 "num_base_bdevs_discovered": 4, 00:10:22.245 "num_base_bdevs_operational": 4, 00:10:22.245 "base_bdevs_list": [ 00:10:22.245 { 00:10:22.245 "name": "pt1", 00:10:22.245 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:22.245 "is_configured": true, 00:10:22.245 "data_offset": 2048, 00:10:22.245 "data_size": 63488 00:10:22.245 }, 00:10:22.245 { 00:10:22.245 "name": "pt2", 00:10:22.245 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:22.246 "is_configured": true, 00:10:22.246 "data_offset": 2048, 00:10:22.246 "data_size": 63488 00:10:22.246 }, 00:10:22.246 { 00:10:22.246 "name": "pt3", 00:10:22.246 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:22.246 "is_configured": true, 00:10:22.246 "data_offset": 2048, 00:10:22.246 "data_size": 63488 00:10:22.246 }, 00:10:22.246 { 00:10:22.246 "name": "pt4", 00:10:22.246 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:22.246 "is_configured": true, 00:10:22.246 "data_offset": 2048, 00:10:22.246 "data_size": 63488 00:10:22.246 } 00:10:22.246 ] 00:10:22.246 }' 00:10:22.246 04:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.246 04:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.506 04:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:22.506 04:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:22.506 04:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:22.506 04:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:22.506 04:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:22.506 04:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:22.506 04:08:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:22.506 04:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:22.506 04:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.506 04:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.506 [2024-11-21 04:08:22.455944] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:22.506 04:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.767 04:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:22.767 "name": "raid_bdev1", 00:10:22.767 "aliases": [ 00:10:22.767 "a34aea81-000e-4e88-9130-579428148110" 00:10:22.767 ], 00:10:22.767 "product_name": "Raid Volume", 00:10:22.767 "block_size": 512, 00:10:22.767 "num_blocks": 253952, 00:10:22.767 "uuid": "a34aea81-000e-4e88-9130-579428148110", 00:10:22.767 "assigned_rate_limits": { 00:10:22.767 "rw_ios_per_sec": 0, 00:10:22.767 "rw_mbytes_per_sec": 0, 00:10:22.767 "r_mbytes_per_sec": 0, 00:10:22.767 "w_mbytes_per_sec": 0 00:10:22.767 }, 00:10:22.767 "claimed": false, 00:10:22.767 "zoned": false, 00:10:22.767 "supported_io_types": { 00:10:22.767 "read": true, 00:10:22.767 "write": true, 00:10:22.767 "unmap": true, 00:10:22.767 "flush": true, 00:10:22.767 "reset": true, 00:10:22.767 "nvme_admin": false, 00:10:22.767 "nvme_io": false, 00:10:22.767 "nvme_io_md": false, 00:10:22.767 "write_zeroes": true, 00:10:22.767 "zcopy": false, 00:10:22.767 "get_zone_info": false, 00:10:22.767 "zone_management": false, 00:10:22.767 "zone_append": false, 00:10:22.767 "compare": false, 00:10:22.767 "compare_and_write": false, 00:10:22.767 "abort": false, 00:10:22.767 "seek_hole": false, 00:10:22.767 "seek_data": false, 00:10:22.767 "copy": false, 00:10:22.767 "nvme_iov_md": false 00:10:22.767 }, 00:10:22.767 
"memory_domains": [ 00:10:22.767 { 00:10:22.767 "dma_device_id": "system", 00:10:22.767 "dma_device_type": 1 00:10:22.767 }, 00:10:22.767 { 00:10:22.768 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.768 "dma_device_type": 2 00:10:22.768 }, 00:10:22.768 { 00:10:22.768 "dma_device_id": "system", 00:10:22.768 "dma_device_type": 1 00:10:22.768 }, 00:10:22.768 { 00:10:22.768 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.768 "dma_device_type": 2 00:10:22.768 }, 00:10:22.768 { 00:10:22.768 "dma_device_id": "system", 00:10:22.768 "dma_device_type": 1 00:10:22.768 }, 00:10:22.768 { 00:10:22.768 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.768 "dma_device_type": 2 00:10:22.768 }, 00:10:22.768 { 00:10:22.768 "dma_device_id": "system", 00:10:22.768 "dma_device_type": 1 00:10:22.768 }, 00:10:22.768 { 00:10:22.768 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.768 "dma_device_type": 2 00:10:22.768 } 00:10:22.768 ], 00:10:22.768 "driver_specific": { 00:10:22.768 "raid": { 00:10:22.768 "uuid": "a34aea81-000e-4e88-9130-579428148110", 00:10:22.768 "strip_size_kb": 64, 00:10:22.768 "state": "online", 00:10:22.768 "raid_level": "raid0", 00:10:22.768 "superblock": true, 00:10:22.768 "num_base_bdevs": 4, 00:10:22.768 "num_base_bdevs_discovered": 4, 00:10:22.768 "num_base_bdevs_operational": 4, 00:10:22.768 "base_bdevs_list": [ 00:10:22.768 { 00:10:22.768 "name": "pt1", 00:10:22.768 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:22.768 "is_configured": true, 00:10:22.768 "data_offset": 2048, 00:10:22.768 "data_size": 63488 00:10:22.768 }, 00:10:22.768 { 00:10:22.768 "name": "pt2", 00:10:22.768 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:22.768 "is_configured": true, 00:10:22.768 "data_offset": 2048, 00:10:22.768 "data_size": 63488 00:10:22.768 }, 00:10:22.768 { 00:10:22.768 "name": "pt3", 00:10:22.768 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:22.768 "is_configured": true, 00:10:22.768 "data_offset": 2048, 00:10:22.768 "data_size": 63488 
00:10:22.768 }, 00:10:22.768 { 00:10:22.768 "name": "pt4", 00:10:22.768 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:22.768 "is_configured": true, 00:10:22.768 "data_offset": 2048, 00:10:22.768 "data_size": 63488 00:10:22.768 } 00:10:22.768 ] 00:10:22.768 } 00:10:22.768 } 00:10:22.768 }' 00:10:22.768 04:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:22.768 04:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:22.768 pt2 00:10:22.768 pt3 00:10:22.768 pt4' 00:10:22.768 04:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:22.768 04:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:22.768 04:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:22.768 04:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:22.768 04:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:22.768 04:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.768 04:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.768 04:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.768 04:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:22.768 04:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:22.768 04:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:22.768 04:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:22.768 04:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:22.768 04:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.768 04:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.768 04:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.768 04:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:22.768 04:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:22.768 04:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:22.768 04:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:22.768 04:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.768 04:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.768 04:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:22.768 04:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.768 04:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:22.768 04:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:22.768 04:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:22.768 04:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:22.768 04:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:22.768 
04:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.768 04:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.768 04:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.768 04:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:22.768 04:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:23.028 04:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:23.028 04:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.028 04:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:23.028 04:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.028 [2024-11-21 04:08:22.747387] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:23.028 04:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.028 04:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' a34aea81-000e-4e88-9130-579428148110 '!=' a34aea81-000e-4e88-9130-579428148110 ']' 00:10:23.028 04:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:10:23.028 04:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:23.028 04:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:23.028 04:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81678 00:10:23.028 04:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 81678 ']' 00:10:23.028 04:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 81678 00:10:23.028 04:08:22 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:10:23.029 04:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:23.029 04:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81678 00:10:23.029 04:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:23.029 04:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:23.029 killing process with pid 81678 00:10:23.029 04:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81678' 00:10:23.029 04:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 81678 00:10:23.029 [2024-11-21 04:08:22.815747] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:23.029 04:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 81678 00:10:23.029 [2024-11-21 04:08:22.815888] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:23.029 [2024-11-21 04:08:22.815981] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:23.029 [2024-11-21 04:08:22.815999] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:10:23.029 [2024-11-21 04:08:22.900145] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:23.289 04:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:23.289 00:10:23.289 real 0m4.365s 00:10:23.289 user 0m6.651s 00:10:23.289 sys 0m1.078s 00:10:23.289 04:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:23.289 04:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.289 ************************************ 00:10:23.289 END TEST raid_superblock_test 
00:10:23.289 ************************************ 00:10:23.549 04:08:23 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:10:23.549 04:08:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:23.549 04:08:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:23.549 04:08:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:23.549 ************************************ 00:10:23.549 START TEST raid_read_error_test 00:10:23.549 ************************************ 00:10:23.549 04:08:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read 00:10:23.549 04:08:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:23.549 04:08:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:23.549 04:08:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:23.549 04:08:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:23.549 04:08:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:23.549 04:08:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:23.549 04:08:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:23.549 04:08:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:23.549 04:08:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:23.549 04:08:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:23.549 04:08:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:23.549 04:08:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:23.549 04:08:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( 
i++ )) 00:10:23.549 04:08:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:23.549 04:08:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:23.549 04:08:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:23.549 04:08:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:23.549 04:08:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:23.549 04:08:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:23.549 04:08:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:23.549 04:08:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:23.549 04:08:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:23.549 04:08:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:23.549 04:08:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:23.549 04:08:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:23.549 04:08:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:23.549 04:08:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:23.549 04:08:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:23.549 04:08:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.odZq7qFkyM 00:10:23.549 04:08:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=81926 00:10:23.549 04:08:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f 
-L bdev_raid 00:10:23.549 04:08:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 81926 00:10:23.549 04:08:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 81926 ']' 00:10:23.549 04:08:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:23.549 04:08:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:23.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:23.549 04:08:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:23.549 04:08:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:23.549 04:08:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.549 [2024-11-21 04:08:23.418772] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:10:23.549 [2024-11-21 04:08:23.418975] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81926 ] 00:10:23.809 [2024-11-21 04:08:23.556347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:23.809 [2024-11-21 04:08:23.597555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.809 [2024-11-21 04:08:23.674471] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:23.809 [2024-11-21 04:08:23.674515] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:24.380 04:08:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:24.380 04:08:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:24.380 04:08:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:24.380 04:08:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:24.380 04:08:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.380 04:08:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.380 BaseBdev1_malloc 00:10:24.380 04:08:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.380 04:08:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:24.380 04:08:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.380 04:08:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.380 true 00:10:24.380 04:08:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:24.380 04:08:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:24.380 04:08:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.380 04:08:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.380 [2024-11-21 04:08:24.308984] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:24.380 [2024-11-21 04:08:24.309047] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:24.380 [2024-11-21 04:08:24.309068] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:10:24.380 [2024-11-21 04:08:24.309078] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:24.380 [2024-11-21 04:08:24.311596] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:24.380 [2024-11-21 04:08:24.311636] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:24.380 BaseBdev1 00:10:24.380 04:08:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.380 04:08:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:24.380 04:08:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:24.380 04:08:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.380 04:08:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.380 BaseBdev2_malloc 00:10:24.380 04:08:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.380 04:08:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:24.380 04:08:24 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.380 04:08:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.380 true 00:10:24.380 04:08:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.380 04:08:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:24.380 04:08:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.380 04:08:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.678 [2024-11-21 04:08:24.355730] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:24.678 [2024-11-21 04:08:24.355786] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:24.678 [2024-11-21 04:08:24.355807] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:10:24.678 [2024-11-21 04:08:24.355827] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:24.678 [2024-11-21 04:08:24.358353] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:24.679 [2024-11-21 04:08:24.358389] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:24.679 BaseBdev2 00:10:24.679 04:08:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.679 04:08:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:24.679 04:08:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:24.679 04:08:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.679 04:08:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.679 BaseBdev3_malloc 00:10:24.679 04:08:24 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.679 04:08:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:24.679 04:08:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.679 04:08:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.679 true 00:10:24.679 04:08:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.679 04:08:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:24.679 04:08:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.679 04:08:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.679 [2024-11-21 04:08:24.402865] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:24.679 [2024-11-21 04:08:24.402918] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:24.679 [2024-11-21 04:08:24.402940] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:10:24.679 [2024-11-21 04:08:24.402949] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:24.679 [2024-11-21 04:08:24.405417] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:24.679 [2024-11-21 04:08:24.405452] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:24.679 BaseBdev3 00:10:24.679 04:08:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.679 04:08:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:24.679 04:08:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:10:24.679 04:08:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.679 04:08:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.679 BaseBdev4_malloc 00:10:24.679 04:08:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.679 04:08:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:24.679 04:08:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.679 04:08:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.679 true 00:10:24.679 04:08:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.679 04:08:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:24.679 04:08:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.679 04:08:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.679 [2024-11-21 04:08:24.459192] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:24.679 [2024-11-21 04:08:24.459263] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:24.679 [2024-11-21 04:08:24.459293] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:24.679 [2024-11-21 04:08:24.459304] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:24.679 [2024-11-21 04:08:24.461737] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:24.679 [2024-11-21 04:08:24.461775] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:24.679 BaseBdev4 00:10:24.679 04:08:24 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.679 04:08:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:24.679 04:08:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.679 04:08:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.679 [2024-11-21 04:08:24.471247] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:24.679 [2024-11-21 04:08:24.473454] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:24.679 [2024-11-21 04:08:24.473539] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:24.679 [2024-11-21 04:08:24.473600] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:24.679 [2024-11-21 04:08:24.473839] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002000 00:10:24.679 [2024-11-21 04:08:24.473859] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:24.679 [2024-11-21 04:08:24.474160] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002ef0 00:10:24.679 [2024-11-21 04:08:24.474360] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002000 00:10:24.679 [2024-11-21 04:08:24.474383] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002000 00:10:24.679 [2024-11-21 04:08:24.474535] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:24.679 04:08:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.679 04:08:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:24.679 04:08:24 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:24.679 04:08:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:24.679 04:08:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:24.679 04:08:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:24.679 04:08:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:24.679 04:08:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.679 04:08:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.679 04:08:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.679 04:08:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.679 04:08:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.679 04:08:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:24.679 04:08:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.679 04:08:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.679 04:08:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.679 04:08:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.679 "name": "raid_bdev1", 00:10:24.679 "uuid": "83fb9a90-cfb9-4919-a653-c364f08543da", 00:10:24.679 "strip_size_kb": 64, 00:10:24.679 "state": "online", 00:10:24.679 "raid_level": "raid0", 00:10:24.679 "superblock": true, 00:10:24.679 "num_base_bdevs": 4, 00:10:24.679 "num_base_bdevs_discovered": 4, 00:10:24.679 "num_base_bdevs_operational": 4, 00:10:24.679 "base_bdevs_list": [ 00:10:24.679 
{ 00:10:24.679 "name": "BaseBdev1", 00:10:24.679 "uuid": "67f22a51-5bc8-587e-9087-3e57330e7952", 00:10:24.679 "is_configured": true, 00:10:24.679 "data_offset": 2048, 00:10:24.679 "data_size": 63488 00:10:24.679 }, 00:10:24.679 { 00:10:24.679 "name": "BaseBdev2", 00:10:24.679 "uuid": "3438bdaa-b4eb-5b93-8158-8b3e6bfa2f44", 00:10:24.679 "is_configured": true, 00:10:24.679 "data_offset": 2048, 00:10:24.680 "data_size": 63488 00:10:24.680 }, 00:10:24.680 { 00:10:24.680 "name": "BaseBdev3", 00:10:24.680 "uuid": "9856bd66-2923-5270-8be3-56dbe1dde165", 00:10:24.680 "is_configured": true, 00:10:24.680 "data_offset": 2048, 00:10:24.680 "data_size": 63488 00:10:24.680 }, 00:10:24.680 { 00:10:24.680 "name": "BaseBdev4", 00:10:24.680 "uuid": "1fcbf8a9-e00f-5b04-9b2e-7644eae3faba", 00:10:24.680 "is_configured": true, 00:10:24.680 "data_offset": 2048, 00:10:24.680 "data_size": 63488 00:10:24.680 } 00:10:24.680 ] 00:10:24.680 }' 00:10:24.680 04:08:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.680 04:08:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.250 04:08:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:25.250 04:08:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:25.250 [2024-11-21 04:08:25.026813] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000003090 00:10:26.192 04:08:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:26.192 04:08:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.192 04:08:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.192 04:08:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.192 04:08:25 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:26.192 04:08:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:26.192 04:08:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:26.192 04:08:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:26.192 04:08:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:26.192 04:08:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:26.192 04:08:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:26.192 04:08:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:26.192 04:08:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:26.192 04:08:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.192 04:08:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.192 04:08:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.192 04:08:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.192 04:08:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.192 04:08:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:26.192 04:08:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.192 04:08:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.192 04:08:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.192 04:08:26 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.192 "name": "raid_bdev1", 00:10:26.192 "uuid": "83fb9a90-cfb9-4919-a653-c364f08543da", 00:10:26.192 "strip_size_kb": 64, 00:10:26.192 "state": "online", 00:10:26.192 "raid_level": "raid0", 00:10:26.192 "superblock": true, 00:10:26.192 "num_base_bdevs": 4, 00:10:26.192 "num_base_bdevs_discovered": 4, 00:10:26.192 "num_base_bdevs_operational": 4, 00:10:26.192 "base_bdevs_list": [ 00:10:26.192 { 00:10:26.192 "name": "BaseBdev1", 00:10:26.192 "uuid": "67f22a51-5bc8-587e-9087-3e57330e7952", 00:10:26.192 "is_configured": true, 00:10:26.193 "data_offset": 2048, 00:10:26.193 "data_size": 63488 00:10:26.193 }, 00:10:26.193 { 00:10:26.193 "name": "BaseBdev2", 00:10:26.193 "uuid": "3438bdaa-b4eb-5b93-8158-8b3e6bfa2f44", 00:10:26.193 "is_configured": true, 00:10:26.193 "data_offset": 2048, 00:10:26.193 "data_size": 63488 00:10:26.193 }, 00:10:26.193 { 00:10:26.193 "name": "BaseBdev3", 00:10:26.193 "uuid": "9856bd66-2923-5270-8be3-56dbe1dde165", 00:10:26.193 "is_configured": true, 00:10:26.193 "data_offset": 2048, 00:10:26.193 "data_size": 63488 00:10:26.193 }, 00:10:26.193 { 00:10:26.193 "name": "BaseBdev4", 00:10:26.193 "uuid": "1fcbf8a9-e00f-5b04-9b2e-7644eae3faba", 00:10:26.193 "is_configured": true, 00:10:26.193 "data_offset": 2048, 00:10:26.193 "data_size": 63488 00:10:26.193 } 00:10:26.193 ] 00:10:26.193 }' 00:10:26.193 04:08:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.193 04:08:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.453 04:08:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:26.453 04:08:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.453 04:08:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.453 [2024-11-21 04:08:26.391289] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:26.453 [2024-11-21 04:08:26.391333] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:26.453 [2024-11-21 04:08:26.393885] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:26.453 [2024-11-21 04:08:26.393949] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:26.453 [2024-11-21 04:08:26.394002] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:26.453 [2024-11-21 04:08:26.394012] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state offline 00:10:26.453 { 00:10:26.453 "results": [ 00:10:26.453 { 00:10:26.453 "job": "raid_bdev1", 00:10:26.453 "core_mask": "0x1", 00:10:26.453 "workload": "randrw", 00:10:26.453 "percentage": 50, 00:10:26.453 "status": "finished", 00:10:26.453 "queue_depth": 1, 00:10:26.453 "io_size": 131072, 00:10:26.453 "runtime": 1.364989, 00:10:26.453 "iops": 14260.188177340624, 00:10:26.453 "mibps": 1782.523522167578, 00:10:26.453 "io_failed": 1, 00:10:26.453 "io_timeout": 0, 00:10:26.453 "avg_latency_us": 98.58897901480445, 00:10:26.453 "min_latency_us": 25.041048034934498, 00:10:26.453 "max_latency_us": 1402.2986899563318 00:10:26.453 } 00:10:26.453 ], 00:10:26.453 "core_count": 1 00:10:26.453 } 00:10:26.453 04:08:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.453 04:08:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 81926 00:10:26.453 04:08:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 81926 ']' 00:10:26.453 04:08:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 81926 00:10:26.453 04:08:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:26.453 04:08:26 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:26.453 04:08:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81926 00:10:26.713 04:08:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:26.713 04:08:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:26.713 killing process with pid 81926 00:10:26.713 04:08:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81926' 00:10:26.713 04:08:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 81926 00:10:26.713 [2024-11-21 04:08:26.442001] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:26.713 04:08:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 81926 00:10:26.713 [2024-11-21 04:08:26.511253] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:26.972 04:08:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.odZq7qFkyM 00:10:26.972 04:08:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:26.972 04:08:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:26.972 04:08:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:10:26.972 04:08:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:26.972 04:08:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:26.972 04:08:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:26.972 04:08:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:10:26.972 00:10:26.972 real 0m3.540s 00:10:26.972 user 0m4.335s 00:10:26.972 sys 0m0.669s 00:10:26.972 04:08:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:10:26.972 ************************************ 00:10:26.972 04:08:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.972 END TEST raid_read_error_test 00:10:26.972 ************************************ 00:10:26.972 04:08:26 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:10:26.972 04:08:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:26.972 04:08:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:26.973 04:08:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:26.973 ************************************ 00:10:26.973 START TEST raid_write_error_test 00:10:26.973 ************************************ 00:10:26.973 04:08:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write 00:10:26.973 04:08:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:26.973 04:08:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:26.973 04:08:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:26.973 04:08:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:26.973 04:08:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:26.973 04:08:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:26.973 04:08:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:26.973 04:08:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:26.973 04:08:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:26.973 04:08:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:26.973 04:08:26 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:26.973 04:08:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:26.973 04:08:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:26.973 04:08:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:26.973 04:08:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:26.973 04:08:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:26.973 04:08:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:26.973 04:08:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:26.973 04:08:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:26.973 04:08:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:26.973 04:08:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:26.973 04:08:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:26.973 04:08:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:26.973 04:08:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:26.973 04:08:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:26.973 04:08:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:26.973 04:08:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:26.973 04:08:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:26.973 04:08:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.aWqcBh6nN3 00:10:27.233 04:08:26 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=82061 00:10:27.233 04:08:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 82061 00:10:27.233 04:08:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:27.233 04:08:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 82061 ']' 00:10:27.233 04:08:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:27.233 04:08:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:27.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:27.233 04:08:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:27.233 04:08:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:27.233 04:08:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.233 [2024-11-21 04:08:27.034348] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:10:27.233 [2024-11-21 04:08:27.034504] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82061 ] 00:10:27.233 [2024-11-21 04:08:27.189615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:27.493 [2024-11-21 04:08:27.230710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.493 [2024-11-21 04:08:27.309237] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:27.493 [2024-11-21 04:08:27.309291] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:28.062 04:08:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:28.062 04:08:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:28.062 04:08:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:28.062 04:08:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:28.062 04:08:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.062 04:08:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.063 BaseBdev1_malloc 00:10:28.063 04:08:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.063 04:08:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:28.063 04:08:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.063 04:08:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.063 true 00:10:28.063 04:08:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:28.063 04:08:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:28.063 04:08:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.063 04:08:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.063 [2024-11-21 04:08:27.901176] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:28.063 [2024-11-21 04:08:27.901258] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:28.063 [2024-11-21 04:08:27.901286] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:10:28.063 [2024-11-21 04:08:27.901297] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:28.063 [2024-11-21 04:08:27.903729] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:28.063 [2024-11-21 04:08:27.903763] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:28.063 BaseBdev1 00:10:28.063 04:08:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.063 04:08:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:28.063 04:08:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:28.063 04:08:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.063 04:08:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.063 BaseBdev2_malloc 00:10:28.063 04:08:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.063 04:08:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:28.063 04:08:27 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.063 04:08:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.063 true 00:10:28.063 04:08:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.063 04:08:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:28.063 04:08:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.063 04:08:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.063 [2024-11-21 04:08:27.948295] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:28.063 [2024-11-21 04:08:27.948341] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:28.063 [2024-11-21 04:08:27.948362] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:10:28.063 [2024-11-21 04:08:27.948379] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:28.063 [2024-11-21 04:08:27.950836] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:28.063 [2024-11-21 04:08:27.950873] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:28.063 BaseBdev2 00:10:28.063 04:08:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.063 04:08:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:28.063 04:08:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:28.063 04:08:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.063 04:08:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:10:28.063 BaseBdev3_malloc 00:10:28.063 04:08:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.063 04:08:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:28.063 04:08:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.063 04:08:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.063 true 00:10:28.063 04:08:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.063 04:08:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:28.063 04:08:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.063 04:08:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.063 [2024-11-21 04:08:27.995351] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:28.063 [2024-11-21 04:08:27.995399] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:28.063 [2024-11-21 04:08:27.995418] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:10:28.063 [2024-11-21 04:08:27.995427] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:28.063 [2024-11-21 04:08:27.997861] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:28.063 [2024-11-21 04:08:27.997894] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:28.063 BaseBdev3 00:10:28.063 04:08:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.063 04:08:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:28.063 04:08:27 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:28.063 04:08:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.063 04:08:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.063 BaseBdev4_malloc 00:10:28.063 04:08:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.063 04:08:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:28.063 04:08:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.063 04:08:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.324 true 00:10:28.324 04:08:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.324 04:08:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:28.324 04:08:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.324 04:08:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.324 [2024-11-21 04:08:28.050014] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:28.324 [2024-11-21 04:08:28.050065] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:28.324 [2024-11-21 04:08:28.050089] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:28.324 [2024-11-21 04:08:28.050098] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:28.324 [2024-11-21 04:08:28.052510] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:28.324 [2024-11-21 04:08:28.052546] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:28.324 BaseBdev4 
00:10:28.324 04:08:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.324 04:08:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:28.324 04:08:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.324 04:08:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.324 [2024-11-21 04:08:28.062059] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:28.324 [2024-11-21 04:08:28.064301] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:28.324 [2024-11-21 04:08:28.064411] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:28.324 [2024-11-21 04:08:28.064470] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:28.324 [2024-11-21 04:08:28.064693] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002000 00:10:28.324 [2024-11-21 04:08:28.064712] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:28.324 [2024-11-21 04:08:28.065009] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002ef0 00:10:28.324 [2024-11-21 04:08:28.065167] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002000 00:10:28.324 [2024-11-21 04:08:28.065198] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002000 00:10:28.324 [2024-11-21 04:08:28.065326] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:28.324 04:08:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.324 04:08:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:10:28.324 04:08:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:28.324 04:08:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:28.324 04:08:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:28.324 04:08:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:28.324 04:08:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:28.324 04:08:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.324 04:08:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.324 04:08:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.324 04:08:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.324 04:08:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.324 04:08:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.324 04:08:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.324 04:08:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:28.324 04:08:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.324 04:08:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.324 "name": "raid_bdev1", 00:10:28.324 "uuid": "710be4c5-2156-4ca3-b49d-c921155ac0f7", 00:10:28.324 "strip_size_kb": 64, 00:10:28.324 "state": "online", 00:10:28.324 "raid_level": "raid0", 00:10:28.324 "superblock": true, 00:10:28.324 "num_base_bdevs": 4, 00:10:28.324 "num_base_bdevs_discovered": 4, 00:10:28.324 
"num_base_bdevs_operational": 4, 00:10:28.324 "base_bdevs_list": [ 00:10:28.324 { 00:10:28.324 "name": "BaseBdev1", 00:10:28.324 "uuid": "b50b2e77-9525-5aee-83c5-fcc5f927c482", 00:10:28.324 "is_configured": true, 00:10:28.324 "data_offset": 2048, 00:10:28.324 "data_size": 63488 00:10:28.324 }, 00:10:28.324 { 00:10:28.324 "name": "BaseBdev2", 00:10:28.324 "uuid": "71eaf340-9345-56a8-8806-59dfb104ed42", 00:10:28.324 "is_configured": true, 00:10:28.324 "data_offset": 2048, 00:10:28.324 "data_size": 63488 00:10:28.324 }, 00:10:28.324 { 00:10:28.324 "name": "BaseBdev3", 00:10:28.325 "uuid": "089b3a96-9da3-5c32-8b10-9bef23ababbd", 00:10:28.325 "is_configured": true, 00:10:28.325 "data_offset": 2048, 00:10:28.325 "data_size": 63488 00:10:28.325 }, 00:10:28.325 { 00:10:28.325 "name": "BaseBdev4", 00:10:28.325 "uuid": "19f8f51e-1683-5b41-a835-9989227c03ac", 00:10:28.325 "is_configured": true, 00:10:28.325 "data_offset": 2048, 00:10:28.325 "data_size": 63488 00:10:28.325 } 00:10:28.325 ] 00:10:28.325 }' 00:10:28.325 04:08:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.325 04:08:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.584 04:08:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:28.584 04:08:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:28.844 [2024-11-21 04:08:28.581712] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000003090 00:10:29.793 04:08:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:29.793 04:08:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.793 04:08:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.793 04:08:29 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.793 04:08:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:29.793 04:08:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:29.793 04:08:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:29.793 04:08:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:29.793 04:08:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:29.793 04:08:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:29.793 04:08:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:29.793 04:08:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:29.793 04:08:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:29.793 04:08:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.793 04:08:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.793 04:08:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.793 04:08:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.793 04:08:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.793 04:08:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.793 04:08:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.793 04:08:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:29.793 04:08:29 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.793 04:08:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.793 "name": "raid_bdev1", 00:10:29.793 "uuid": "710be4c5-2156-4ca3-b49d-c921155ac0f7", 00:10:29.793 "strip_size_kb": 64, 00:10:29.793 "state": "online", 00:10:29.793 "raid_level": "raid0", 00:10:29.793 "superblock": true, 00:10:29.793 "num_base_bdevs": 4, 00:10:29.793 "num_base_bdevs_discovered": 4, 00:10:29.793 "num_base_bdevs_operational": 4, 00:10:29.793 "base_bdevs_list": [ 00:10:29.793 { 00:10:29.793 "name": "BaseBdev1", 00:10:29.793 "uuid": "b50b2e77-9525-5aee-83c5-fcc5f927c482", 00:10:29.793 "is_configured": true, 00:10:29.793 "data_offset": 2048, 00:10:29.793 "data_size": 63488 00:10:29.793 }, 00:10:29.793 { 00:10:29.793 "name": "BaseBdev2", 00:10:29.793 "uuid": "71eaf340-9345-56a8-8806-59dfb104ed42", 00:10:29.793 "is_configured": true, 00:10:29.793 "data_offset": 2048, 00:10:29.793 "data_size": 63488 00:10:29.793 }, 00:10:29.793 { 00:10:29.793 "name": "BaseBdev3", 00:10:29.793 "uuid": "089b3a96-9da3-5c32-8b10-9bef23ababbd", 00:10:29.793 "is_configured": true, 00:10:29.793 "data_offset": 2048, 00:10:29.793 "data_size": 63488 00:10:29.793 }, 00:10:29.793 { 00:10:29.793 "name": "BaseBdev4", 00:10:29.793 "uuid": "19f8f51e-1683-5b41-a835-9989227c03ac", 00:10:29.793 "is_configured": true, 00:10:29.793 "data_offset": 2048, 00:10:29.793 "data_size": 63488 00:10:29.793 } 00:10:29.793 ] 00:10:29.793 }' 00:10:29.793 04:08:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.793 04:08:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.054 04:08:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:30.054 04:08:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.054 04:08:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:10:30.054 [2024-11-21 04:08:29.962549] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:30.054 [2024-11-21 04:08:29.962591] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:30.054 [2024-11-21 04:08:29.965179] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:30.054 [2024-11-21 04:08:29.965267] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:30.054 [2024-11-21 04:08:29.965322] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:30.054 [2024-11-21 04:08:29.965332] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state offline 00:10:30.054 { 00:10:30.054 "results": [ 00:10:30.054 { 00:10:30.054 "job": "raid_bdev1", 00:10:30.054 "core_mask": "0x1", 00:10:30.054 "workload": "randrw", 00:10:30.054 "percentage": 50, 00:10:30.054 "status": "finished", 00:10:30.054 "queue_depth": 1, 00:10:30.054 "io_size": 131072, 00:10:30.054 "runtime": 1.381248, 00:10:30.054 "iops": 14413.052543786489, 00:10:30.054 "mibps": 1801.6315679733111, 00:10:30.054 "io_failed": 1, 00:10:30.054 "io_timeout": 0, 00:10:30.054 "avg_latency_us": 97.62409811805287, 00:10:30.054 "min_latency_us": 25.4882096069869, 00:10:30.054 "max_latency_us": 1502.46288209607 00:10:30.054 } 00:10:30.054 ], 00:10:30.054 "core_count": 1 00:10:30.054 } 00:10:30.054 04:08:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.054 04:08:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 82061 00:10:30.054 04:08:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 82061 ']' 00:10:30.054 04:08:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 82061 00:10:30.054 04:08:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 
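The bdevperf results JSON captured above reports `io_failed`, `runtime`, `iops`, and `mibps` for the `raid_bdev1` job, and the test later extracts `fail_per_s=0.72` from the bdevperf output with `awk`. A minimal Python sketch, recomputing those derived figures from the JSON in this log; the arithmetic (IOPS times I/O size for throughput, failed I/Os divided by runtime for fail/s) is an assumption about how bdevperf derives them, not taken from the bdevperf source:

```python
import json

# Results object copied from the bdevperf output captured in this log.
results = json.loads("""{
  "results": [
    {
      "job": "raid_bdev1",
      "io_size": 131072,
      "runtime": 1.381248,
      "iops": 14413.052543786489,
      "io_failed": 1
    }
  ],
  "core_count": 1
}""")

job = results["results"][0]

# Throughput in MiB/s: IOPS times I/O size in bytes, scaled to MiB.
mibps = job["iops"] * job["io_size"] / (1024 * 1024)

# Failed I/Os per second; the test script compares this against "0.00"
# to confirm the injected write error was actually observed.
fail_per_s = job["io_failed"] / job["runtime"]

print(round(mibps, 2), round(fail_per_s, 2))
```

Both recomputed values line up with the log: roughly 1801.63 MiB/s, and 0.72 failures per second, which is the non-zero value the `[[ 0.72 != \0\.\0\0 ]]` check above passes on.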
00:10:30.054 04:08:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:30.054 04:08:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82061 00:10:30.054 04:08:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:30.054 04:08:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:30.054 killing process with pid 82061 00:10:30.054 04:08:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82061' 00:10:30.054 04:08:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 82061 00:10:30.054 [2024-11-21 04:08:30.007158] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:30.054 04:08:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 82061 00:10:30.314 [2024-11-21 04:08:30.076753] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:30.574 04:08:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.aWqcBh6nN3 00:10:30.574 04:08:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:30.574 04:08:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:30.574 04:08:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:10:30.574 04:08:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:30.574 04:08:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:30.574 04:08:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:30.574 04:08:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:10:30.574 00:10:30.574 real 0m3.490s 00:10:30.574 user 0m4.228s 00:10:30.574 sys 0m0.665s 00:10:30.574 04:08:30 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:30.574 04:08:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.574 ************************************ 00:10:30.574 END TEST raid_write_error_test 00:10:30.574 ************************************ 00:10:30.574 04:08:30 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:30.574 04:08:30 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:10:30.574 04:08:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:30.574 04:08:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:30.574 04:08:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:30.574 ************************************ 00:10:30.574 START TEST raid_state_function_test 00:10:30.574 ************************************ 00:10:30.574 04:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false 00:10:30.574 04:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:30.574 04:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:30.574 04:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:30.574 04:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:30.574 04:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:30.574 04:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:30.574 04:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:30.574 04:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:30.574 04:08:30 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:30.574 04:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:30.574 04:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:30.574 04:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:30.574 04:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:30.574 04:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:30.574 04:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:30.574 04:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:30.574 04:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:30.574 04:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:30.574 04:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:30.574 04:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:30.574 04:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:30.574 04:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:30.574 04:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:30.574 04:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:30.574 04:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:30.574 04:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:30.574 04:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # 
strip_size_create_arg='-z 64' 00:10:30.574 04:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:30.574 04:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:30.574 04:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=82188 00:10:30.575 04:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:30.575 Process raid pid: 82188 00:10:30.575 04:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82188' 00:10:30.575 04:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 82188 00:10:30.575 04:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 82188 ']' 00:10:30.575 04:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:30.575 04:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:30.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:30.575 04:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:30.575 04:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:30.575 04:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.842 [2024-11-21 04:08:30.591229] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:10:30.842 [2024-11-21 04:08:30.591390] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:30.842 [2024-11-21 04:08:30.748448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:30.842 [2024-11-21 04:08:30.787803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.116 [2024-11-21 04:08:30.865091] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:31.116 [2024-11-21 04:08:30.865140] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:31.686 04:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:31.686 04:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:31.687 04:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:31.687 04:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.687 04:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.687 [2024-11-21 04:08:31.432674] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:31.687 [2024-11-21 04:08:31.432745] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:31.687 [2024-11-21 04:08:31.432763] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:31.687 [2024-11-21 04:08:31.432774] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:31.687 [2024-11-21 04:08:31.432780] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:31.687 [2024-11-21 04:08:31.432792] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:31.687 [2024-11-21 04:08:31.432798] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:31.687 [2024-11-21 04:08:31.432807] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:31.687 04:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.687 04:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:31.687 04:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:31.687 04:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:31.687 04:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:31.687 04:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:31.687 04:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:31.687 04:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.687 04:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.687 04:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.687 04:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.687 04:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.687 04:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.687 04:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] 
| select(.name == "Existed_Raid")' 00:10:31.687 04:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.687 04:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.687 04:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.687 "name": "Existed_Raid", 00:10:31.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.687 "strip_size_kb": 64, 00:10:31.687 "state": "configuring", 00:10:31.687 "raid_level": "concat", 00:10:31.687 "superblock": false, 00:10:31.687 "num_base_bdevs": 4, 00:10:31.687 "num_base_bdevs_discovered": 0, 00:10:31.687 "num_base_bdevs_operational": 4, 00:10:31.687 "base_bdevs_list": [ 00:10:31.687 { 00:10:31.687 "name": "BaseBdev1", 00:10:31.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.687 "is_configured": false, 00:10:31.687 "data_offset": 0, 00:10:31.687 "data_size": 0 00:10:31.687 }, 00:10:31.687 { 00:10:31.687 "name": "BaseBdev2", 00:10:31.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.687 "is_configured": false, 00:10:31.687 "data_offset": 0, 00:10:31.687 "data_size": 0 00:10:31.687 }, 00:10:31.687 { 00:10:31.687 "name": "BaseBdev3", 00:10:31.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.687 "is_configured": false, 00:10:31.687 "data_offset": 0, 00:10:31.687 "data_size": 0 00:10:31.687 }, 00:10:31.687 { 00:10:31.687 "name": "BaseBdev4", 00:10:31.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.687 "is_configured": false, 00:10:31.687 "data_offset": 0, 00:10:31.687 "data_size": 0 00:10:31.687 } 00:10:31.687 ] 00:10:31.687 }' 00:10:31.687 04:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.687 04:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.948 04:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
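The `verify_raid_bdev_state` helper traced above filters the output of `rpc_cmd bdev_raid_get_bdevs all` with `jq -r '.[] | select(.name == "Existed_Raid")'` and then checks the selected entry's state, level, strip size, and base-bdev counts. A small Python sketch of that same selection and the checks, run against the JSON captured in this log (this mirrors the jq filter for illustration; it is not the test script itself):

```python
import json

# Output of "bdev_raid_get_bdevs all" as captured in this log,
# trimmed to the fields the verification step actually inspects.
bdevs = json.loads("""[
  {
    "name": "Existed_Raid",
    "strip_size_kb": 64,
    "state": "configuring",
    "raid_level": "concat",
    "superblock": false,
    "num_base_bdevs": 4,
    "num_base_bdevs_discovered": 0,
    "num_base_bdevs_operational": 4
  }
]""")

# Equivalent of jq: .[] | select(.name == "Existed_Raid")
info = next(b for b in bdevs if b["name"] == "Existed_Raid")

# The properties verify_raid_bdev_state asserts on the selected entry:
# no base bdevs exist yet, so the raid stays in "configuring" state.
assert info["state"] == "configuring"
assert info["raid_level"] == "concat"
assert info["strip_size_kb"] == 64
assert info["num_base_bdevs_discovered"] == 0
assert info["num_base_bdevs_operational"] == 4

print(info["state"])
```

Once `bdev_malloc_create` supplies real base bdevs (as happens further down in the log), `num_base_bdevs_discovered` ticks up toward `num_base_bdevs_operational`, and only then can the raid transition out of `configuring`.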
00:10:31.948 04:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.948 04:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.948 [2024-11-21 04:08:31.780025] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:31.948 [2024-11-21 04:08:31.780080] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:10:31.948 04:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.948 04:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:31.948 04:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.948 04:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.948 [2024-11-21 04:08:31.788006] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:31.948 [2024-11-21 04:08:31.788060] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:31.948 [2024-11-21 04:08:31.788070] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:31.948 [2024-11-21 04:08:31.788081] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:31.948 [2024-11-21 04:08:31.788087] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:31.948 [2024-11-21 04:08:31.788096] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:31.948 [2024-11-21 04:08:31.788102] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:31.948 [2024-11-21 04:08:31.788111] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:31.948 04:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.948 04:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:31.948 04:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.948 04:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.948 [2024-11-21 04:08:31.811089] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:31.948 BaseBdev1 00:10:31.948 04:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.948 04:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:31.948 04:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:31.948 04:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:31.948 04:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:31.948 04:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:31.948 04:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:31.948 04:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:31.948 04:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.948 04:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.948 04:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.948 04:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:31.948 04:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.948 04:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.948 [ 00:10:31.948 { 00:10:31.948 "name": "BaseBdev1", 00:10:31.948 "aliases": [ 00:10:31.948 "f8765506-60bf-4040-9feb-8c6b0e21d39e" 00:10:31.948 ], 00:10:31.948 "product_name": "Malloc disk", 00:10:31.948 "block_size": 512, 00:10:31.948 "num_blocks": 65536, 00:10:31.948 "uuid": "f8765506-60bf-4040-9feb-8c6b0e21d39e", 00:10:31.948 "assigned_rate_limits": { 00:10:31.948 "rw_ios_per_sec": 0, 00:10:31.948 "rw_mbytes_per_sec": 0, 00:10:31.948 "r_mbytes_per_sec": 0, 00:10:31.948 "w_mbytes_per_sec": 0 00:10:31.948 }, 00:10:31.948 "claimed": true, 00:10:31.948 "claim_type": "exclusive_write", 00:10:31.948 "zoned": false, 00:10:31.948 "supported_io_types": { 00:10:31.948 "read": true, 00:10:31.948 "write": true, 00:10:31.948 "unmap": true, 00:10:31.948 "flush": true, 00:10:31.948 "reset": true, 00:10:31.948 "nvme_admin": false, 00:10:31.948 "nvme_io": false, 00:10:31.948 "nvme_io_md": false, 00:10:31.948 "write_zeroes": true, 00:10:31.948 "zcopy": true, 00:10:31.948 "get_zone_info": false, 00:10:31.948 "zone_management": false, 00:10:31.948 "zone_append": false, 00:10:31.948 "compare": false, 00:10:31.948 "compare_and_write": false, 00:10:31.948 "abort": true, 00:10:31.948 "seek_hole": false, 00:10:31.948 "seek_data": false, 00:10:31.948 "copy": true, 00:10:31.948 "nvme_iov_md": false 00:10:31.948 }, 00:10:31.948 "memory_domains": [ 00:10:31.948 { 00:10:31.948 "dma_device_id": "system", 00:10:31.948 "dma_device_type": 1 00:10:31.948 }, 00:10:31.948 { 00:10:31.948 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.948 "dma_device_type": 2 00:10:31.948 } 00:10:31.948 ], 00:10:31.949 "driver_specific": {} 00:10:31.949 } 00:10:31.949 ] 00:10:31.949 04:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:10:31.949 04:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:31.949 04:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:31.949 04:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:31.949 04:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:31.949 04:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:31.949 04:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:31.949 04:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:31.949 04:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.949 04:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.949 04:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.949 04:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.949 04:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.949 04:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.949 04:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.949 04:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:31.949 04:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.949 04:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.949 "name": "Existed_Raid", 
00:10:31.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.949 "strip_size_kb": 64, 00:10:31.949 "state": "configuring", 00:10:31.949 "raid_level": "concat", 00:10:31.949 "superblock": false, 00:10:31.949 "num_base_bdevs": 4, 00:10:31.949 "num_base_bdevs_discovered": 1, 00:10:31.949 "num_base_bdevs_operational": 4, 00:10:31.949 "base_bdevs_list": [ 00:10:31.949 { 00:10:31.949 "name": "BaseBdev1", 00:10:31.949 "uuid": "f8765506-60bf-4040-9feb-8c6b0e21d39e", 00:10:31.949 "is_configured": true, 00:10:31.949 "data_offset": 0, 00:10:31.949 "data_size": 65536 00:10:31.949 }, 00:10:31.949 { 00:10:31.949 "name": "BaseBdev2", 00:10:31.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.949 "is_configured": false, 00:10:31.949 "data_offset": 0, 00:10:31.949 "data_size": 0 00:10:31.949 }, 00:10:31.949 { 00:10:31.949 "name": "BaseBdev3", 00:10:31.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.949 "is_configured": false, 00:10:31.949 "data_offset": 0, 00:10:31.949 "data_size": 0 00:10:31.949 }, 00:10:31.949 { 00:10:31.949 "name": "BaseBdev4", 00:10:31.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.949 "is_configured": false, 00:10:31.949 "data_offset": 0, 00:10:31.949 "data_size": 0 00:10:31.949 } 00:10:31.949 ] 00:10:31.949 }' 00:10:31.949 04:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.949 04:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.520 04:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:32.520 04:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.520 04:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.520 [2024-11-21 04:08:32.194509] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:32.520 [2024-11-21 04:08:32.194584] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:10:32.520 04:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.520 04:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:32.520 04:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.520 04:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.520 [2024-11-21 04:08:32.206522] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:32.520 [2024-11-21 04:08:32.208690] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:32.520 [2024-11-21 04:08:32.208735] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:32.520 [2024-11-21 04:08:32.208746] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:32.520 [2024-11-21 04:08:32.208754] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:32.520 [2024-11-21 04:08:32.208761] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:32.520 [2024-11-21 04:08:32.208769] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:32.520 04:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.520 04:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:32.520 04:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:32.520 04:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:10:32.520 04:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:32.520 04:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:32.520 04:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:32.520 04:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:32.520 04:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:32.520 04:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.521 04:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.521 04:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.521 04:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.521 04:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.521 04:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.521 04:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.521 04:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.521 04:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.521 04:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.521 "name": "Existed_Raid", 00:10:32.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.521 "strip_size_kb": 64, 00:10:32.521 "state": "configuring", 00:10:32.521 "raid_level": "concat", 00:10:32.521 "superblock": false, 00:10:32.521 "num_base_bdevs": 4, 00:10:32.521 
"num_base_bdevs_discovered": 1, 00:10:32.521 "num_base_bdevs_operational": 4, 00:10:32.521 "base_bdevs_list": [ 00:10:32.521 { 00:10:32.521 "name": "BaseBdev1", 00:10:32.521 "uuid": "f8765506-60bf-4040-9feb-8c6b0e21d39e", 00:10:32.521 "is_configured": true, 00:10:32.521 "data_offset": 0, 00:10:32.521 "data_size": 65536 00:10:32.521 }, 00:10:32.521 { 00:10:32.521 "name": "BaseBdev2", 00:10:32.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.521 "is_configured": false, 00:10:32.521 "data_offset": 0, 00:10:32.521 "data_size": 0 00:10:32.521 }, 00:10:32.521 { 00:10:32.521 "name": "BaseBdev3", 00:10:32.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.521 "is_configured": false, 00:10:32.521 "data_offset": 0, 00:10:32.521 "data_size": 0 00:10:32.521 }, 00:10:32.521 { 00:10:32.521 "name": "BaseBdev4", 00:10:32.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.521 "is_configured": false, 00:10:32.521 "data_offset": 0, 00:10:32.521 "data_size": 0 00:10:32.521 } 00:10:32.521 ] 00:10:32.521 }' 00:10:32.521 04:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.521 04:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.781 04:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:32.781 04:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.781 04:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.781 [2024-11-21 04:08:32.658739] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:32.781 BaseBdev2 00:10:32.781 04:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.782 04:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:32.782 04:08:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:32.782 04:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:32.782 04:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:32.782 04:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:32.782 04:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:32.782 04:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:32.782 04:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.782 04:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.782 04:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.782 04:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:32.782 04:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.782 04:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.782 [ 00:10:32.782 { 00:10:32.782 "name": "BaseBdev2", 00:10:32.782 "aliases": [ 00:10:32.782 "1b0437ac-373b-45ce-83fd-1cb088f39870" 00:10:32.782 ], 00:10:32.782 "product_name": "Malloc disk", 00:10:32.782 "block_size": 512, 00:10:32.782 "num_blocks": 65536, 00:10:32.782 "uuid": "1b0437ac-373b-45ce-83fd-1cb088f39870", 00:10:32.782 "assigned_rate_limits": { 00:10:32.782 "rw_ios_per_sec": 0, 00:10:32.782 "rw_mbytes_per_sec": 0, 00:10:32.782 "r_mbytes_per_sec": 0, 00:10:32.782 "w_mbytes_per_sec": 0 00:10:32.782 }, 00:10:32.782 "claimed": true, 00:10:32.782 "claim_type": "exclusive_write", 00:10:32.782 "zoned": false, 00:10:32.782 "supported_io_types": { 
00:10:32.782 "read": true, 00:10:32.782 "write": true, 00:10:32.782 "unmap": true, 00:10:32.782 "flush": true, 00:10:32.782 "reset": true, 00:10:32.782 "nvme_admin": false, 00:10:32.782 "nvme_io": false, 00:10:32.782 "nvme_io_md": false, 00:10:32.782 "write_zeroes": true, 00:10:32.782 "zcopy": true, 00:10:32.782 "get_zone_info": false, 00:10:32.782 "zone_management": false, 00:10:32.782 "zone_append": false, 00:10:32.782 "compare": false, 00:10:32.782 "compare_and_write": false, 00:10:32.782 "abort": true, 00:10:32.782 "seek_hole": false, 00:10:32.782 "seek_data": false, 00:10:32.782 "copy": true, 00:10:32.782 "nvme_iov_md": false 00:10:32.782 }, 00:10:32.782 "memory_domains": [ 00:10:32.782 { 00:10:32.782 "dma_device_id": "system", 00:10:32.782 "dma_device_type": 1 00:10:32.782 }, 00:10:32.782 { 00:10:32.782 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.782 "dma_device_type": 2 00:10:32.782 } 00:10:32.782 ], 00:10:32.782 "driver_specific": {} 00:10:32.782 } 00:10:32.782 ] 00:10:32.782 04:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.782 04:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:32.782 04:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:32.782 04:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:32.782 04:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:32.782 04:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:32.782 04:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:32.782 04:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:32.782 04:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:10:32.782 04:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:32.782 04:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.782 04:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.782 04:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.782 04:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.782 04:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.782 04:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.782 04:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.782 04:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.782 04:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.782 04:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.782 "name": "Existed_Raid", 00:10:32.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.782 "strip_size_kb": 64, 00:10:32.782 "state": "configuring", 00:10:32.782 "raid_level": "concat", 00:10:32.782 "superblock": false, 00:10:32.782 "num_base_bdevs": 4, 00:10:32.782 "num_base_bdevs_discovered": 2, 00:10:32.782 "num_base_bdevs_operational": 4, 00:10:32.782 "base_bdevs_list": [ 00:10:32.782 { 00:10:32.782 "name": "BaseBdev1", 00:10:32.782 "uuid": "f8765506-60bf-4040-9feb-8c6b0e21d39e", 00:10:32.782 "is_configured": true, 00:10:32.782 "data_offset": 0, 00:10:32.782 "data_size": 65536 00:10:32.782 }, 00:10:32.782 { 00:10:32.782 "name": "BaseBdev2", 00:10:32.782 "uuid": "1b0437ac-373b-45ce-83fd-1cb088f39870", 00:10:32.782 
"is_configured": true, 00:10:32.782 "data_offset": 0, 00:10:32.782 "data_size": 65536 00:10:32.782 }, 00:10:32.782 { 00:10:32.782 "name": "BaseBdev3", 00:10:32.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.782 "is_configured": false, 00:10:32.782 "data_offset": 0, 00:10:32.782 "data_size": 0 00:10:32.782 }, 00:10:32.782 { 00:10:32.782 "name": "BaseBdev4", 00:10:32.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.782 "is_configured": false, 00:10:32.782 "data_offset": 0, 00:10:32.782 "data_size": 0 00:10:32.782 } 00:10:32.782 ] 00:10:32.782 }' 00:10:32.782 04:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.782 04:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.352 04:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:33.352 04:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.352 04:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.352 [2024-11-21 04:08:33.183588] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:33.352 BaseBdev3 00:10:33.352 04:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.352 04:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:33.352 04:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:33.352 04:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:33.352 04:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:33.352 04:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:33.352 04:08:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:33.352 04:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:33.352 04:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.352 04:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.352 04:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.352 04:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:33.352 04:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.352 04:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.352 [ 00:10:33.352 { 00:10:33.352 "name": "BaseBdev3", 00:10:33.352 "aliases": [ 00:10:33.352 "2378eb23-4c96-4146-8c56-55e38752b990" 00:10:33.352 ], 00:10:33.352 "product_name": "Malloc disk", 00:10:33.352 "block_size": 512, 00:10:33.352 "num_blocks": 65536, 00:10:33.352 "uuid": "2378eb23-4c96-4146-8c56-55e38752b990", 00:10:33.352 "assigned_rate_limits": { 00:10:33.352 "rw_ios_per_sec": 0, 00:10:33.352 "rw_mbytes_per_sec": 0, 00:10:33.352 "r_mbytes_per_sec": 0, 00:10:33.352 "w_mbytes_per_sec": 0 00:10:33.352 }, 00:10:33.352 "claimed": true, 00:10:33.352 "claim_type": "exclusive_write", 00:10:33.352 "zoned": false, 00:10:33.352 "supported_io_types": { 00:10:33.352 "read": true, 00:10:33.352 "write": true, 00:10:33.352 "unmap": true, 00:10:33.352 "flush": true, 00:10:33.352 "reset": true, 00:10:33.352 "nvme_admin": false, 00:10:33.352 "nvme_io": false, 00:10:33.352 "nvme_io_md": false, 00:10:33.352 "write_zeroes": true, 00:10:33.352 "zcopy": true, 00:10:33.352 "get_zone_info": false, 00:10:33.352 "zone_management": false, 00:10:33.352 "zone_append": false, 00:10:33.352 "compare": false, 00:10:33.352 "compare_and_write": false, 
00:10:33.352 "abort": true, 00:10:33.352 "seek_hole": false, 00:10:33.352 "seek_data": false, 00:10:33.352 "copy": true, 00:10:33.352 "nvme_iov_md": false 00:10:33.352 }, 00:10:33.352 "memory_domains": [ 00:10:33.352 { 00:10:33.352 "dma_device_id": "system", 00:10:33.352 "dma_device_type": 1 00:10:33.352 }, 00:10:33.352 { 00:10:33.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.352 "dma_device_type": 2 00:10:33.352 } 00:10:33.352 ], 00:10:33.352 "driver_specific": {} 00:10:33.352 } 00:10:33.352 ] 00:10:33.352 04:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.352 04:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:33.352 04:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:33.352 04:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:33.352 04:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:33.352 04:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:33.352 04:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:33.352 04:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:33.352 04:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:33.352 04:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:33.352 04:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.352 04:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.352 04:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:33.352 04:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.352 04:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.352 04:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:33.352 04:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.352 04:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.352 04:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.352 04:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.352 "name": "Existed_Raid", 00:10:33.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.352 "strip_size_kb": 64, 00:10:33.352 "state": "configuring", 00:10:33.352 "raid_level": "concat", 00:10:33.352 "superblock": false, 00:10:33.353 "num_base_bdevs": 4, 00:10:33.353 "num_base_bdevs_discovered": 3, 00:10:33.353 "num_base_bdevs_operational": 4, 00:10:33.353 "base_bdevs_list": [ 00:10:33.353 { 00:10:33.353 "name": "BaseBdev1", 00:10:33.353 "uuid": "f8765506-60bf-4040-9feb-8c6b0e21d39e", 00:10:33.353 "is_configured": true, 00:10:33.353 "data_offset": 0, 00:10:33.353 "data_size": 65536 00:10:33.353 }, 00:10:33.353 { 00:10:33.353 "name": "BaseBdev2", 00:10:33.353 "uuid": "1b0437ac-373b-45ce-83fd-1cb088f39870", 00:10:33.353 "is_configured": true, 00:10:33.353 "data_offset": 0, 00:10:33.353 "data_size": 65536 00:10:33.353 }, 00:10:33.353 { 00:10:33.353 "name": "BaseBdev3", 00:10:33.353 "uuid": "2378eb23-4c96-4146-8c56-55e38752b990", 00:10:33.353 "is_configured": true, 00:10:33.353 "data_offset": 0, 00:10:33.353 "data_size": 65536 00:10:33.353 }, 00:10:33.353 { 00:10:33.353 "name": "BaseBdev4", 00:10:33.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.353 "is_configured": false, 
00:10:33.353 "data_offset": 0, 00:10:33.353 "data_size": 0 00:10:33.353 } 00:10:33.353 ] 00:10:33.353 }' 00:10:33.353 04:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.353 04:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.922 04:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:33.922 04:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.922 04:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.922 [2024-11-21 04:08:33.647976] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:33.922 [2024-11-21 04:08:33.648061] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:10:33.922 [2024-11-21 04:08:33.648071] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:33.922 [2024-11-21 04:08:33.648444] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:10:33.922 [2024-11-21 04:08:33.648604] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:10:33.922 [2024-11-21 04:08:33.648637] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:10:33.922 [2024-11-21 04:08:33.648941] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:33.922 BaseBdev4 00:10:33.922 04:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.922 04:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:33.922 04:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:33.922 04:08:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:33.922 04:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:33.922 04:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:33.922 04:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:33.922 04:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:33.922 04:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.922 04:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.922 04:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.922 04:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:33.922 04:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.922 04:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.922 [ 00:10:33.922 { 00:10:33.922 "name": "BaseBdev4", 00:10:33.922 "aliases": [ 00:10:33.922 "8f450ea9-34a8-4bc5-a623-debfb34920d3" 00:10:33.922 ], 00:10:33.922 "product_name": "Malloc disk", 00:10:33.922 "block_size": 512, 00:10:33.922 "num_blocks": 65536, 00:10:33.922 "uuid": "8f450ea9-34a8-4bc5-a623-debfb34920d3", 00:10:33.922 "assigned_rate_limits": { 00:10:33.922 "rw_ios_per_sec": 0, 00:10:33.922 "rw_mbytes_per_sec": 0, 00:10:33.922 "r_mbytes_per_sec": 0, 00:10:33.922 "w_mbytes_per_sec": 0 00:10:33.922 }, 00:10:33.922 "claimed": true, 00:10:33.922 "claim_type": "exclusive_write", 00:10:33.922 "zoned": false, 00:10:33.922 "supported_io_types": { 00:10:33.922 "read": true, 00:10:33.922 "write": true, 00:10:33.922 "unmap": true, 00:10:33.922 "flush": true, 00:10:33.922 "reset": true, 00:10:33.923 
"nvme_admin": false, 00:10:33.923 "nvme_io": false, 00:10:33.923 "nvme_io_md": false, 00:10:33.923 "write_zeroes": true, 00:10:33.923 "zcopy": true, 00:10:33.923 "get_zone_info": false, 00:10:33.923 "zone_management": false, 00:10:33.923 "zone_append": false, 00:10:33.923 "compare": false, 00:10:33.923 "compare_and_write": false, 00:10:33.923 "abort": true, 00:10:33.923 "seek_hole": false, 00:10:33.923 "seek_data": false, 00:10:33.923 "copy": true, 00:10:33.923 "nvme_iov_md": false 00:10:33.923 }, 00:10:33.923 "memory_domains": [ 00:10:33.923 { 00:10:33.923 "dma_device_id": "system", 00:10:33.923 "dma_device_type": 1 00:10:33.923 }, 00:10:33.923 { 00:10:33.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.923 "dma_device_type": 2 00:10:33.923 } 00:10:33.923 ], 00:10:33.923 "driver_specific": {} 00:10:33.923 } 00:10:33.923 ] 00:10:33.923 04:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.923 04:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:33.923 04:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:33.923 04:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:33.923 04:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:33.923 04:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:33.923 04:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:33.923 04:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:33.923 04:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:33.923 04:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:33.923 
04:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.923 04:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.923 04:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.923 04:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.923 04:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.923 04:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.923 04:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.923 04:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:33.923 04:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.923 04:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.923 "name": "Existed_Raid", 00:10:33.923 "uuid": "44eeec66-2753-4564-ad72-f5aca6e28939", 00:10:33.923 "strip_size_kb": 64, 00:10:33.923 "state": "online", 00:10:33.923 "raid_level": "concat", 00:10:33.923 "superblock": false, 00:10:33.923 "num_base_bdevs": 4, 00:10:33.923 "num_base_bdevs_discovered": 4, 00:10:33.923 "num_base_bdevs_operational": 4, 00:10:33.923 "base_bdevs_list": [ 00:10:33.923 { 00:10:33.923 "name": "BaseBdev1", 00:10:33.923 "uuid": "f8765506-60bf-4040-9feb-8c6b0e21d39e", 00:10:33.923 "is_configured": true, 00:10:33.923 "data_offset": 0, 00:10:33.923 "data_size": 65536 00:10:33.923 }, 00:10:33.923 { 00:10:33.923 "name": "BaseBdev2", 00:10:33.923 "uuid": "1b0437ac-373b-45ce-83fd-1cb088f39870", 00:10:33.923 "is_configured": true, 00:10:33.923 "data_offset": 0, 00:10:33.923 "data_size": 65536 00:10:33.923 }, 00:10:33.923 { 00:10:33.923 "name": "BaseBdev3", 
00:10:33.923 "uuid": "2378eb23-4c96-4146-8c56-55e38752b990", 00:10:33.923 "is_configured": true, 00:10:33.923 "data_offset": 0, 00:10:33.923 "data_size": 65536 00:10:33.923 }, 00:10:33.923 { 00:10:33.923 "name": "BaseBdev4", 00:10:33.923 "uuid": "8f450ea9-34a8-4bc5-a623-debfb34920d3", 00:10:33.923 "is_configured": true, 00:10:33.923 "data_offset": 0, 00:10:33.923 "data_size": 65536 00:10:33.923 } 00:10:33.923 ] 00:10:33.923 }' 00:10:33.923 04:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.923 04:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.494 04:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:34.494 04:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:34.494 04:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:34.494 04:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:34.494 04:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:34.494 04:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:34.494 04:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:34.494 04:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:34.494 04:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.494 04:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.494 [2024-11-21 04:08:34.175536] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:34.494 04:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.494 
04:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:34.494 "name": "Existed_Raid", 00:10:34.494 "aliases": [ 00:10:34.494 "44eeec66-2753-4564-ad72-f5aca6e28939" 00:10:34.494 ], 00:10:34.494 "product_name": "Raid Volume", 00:10:34.494 "block_size": 512, 00:10:34.494 "num_blocks": 262144, 00:10:34.494 "uuid": "44eeec66-2753-4564-ad72-f5aca6e28939", 00:10:34.494 "assigned_rate_limits": { 00:10:34.494 "rw_ios_per_sec": 0, 00:10:34.494 "rw_mbytes_per_sec": 0, 00:10:34.494 "r_mbytes_per_sec": 0, 00:10:34.494 "w_mbytes_per_sec": 0 00:10:34.494 }, 00:10:34.494 "claimed": false, 00:10:34.494 "zoned": false, 00:10:34.494 "supported_io_types": { 00:10:34.494 "read": true, 00:10:34.494 "write": true, 00:10:34.494 "unmap": true, 00:10:34.494 "flush": true, 00:10:34.494 "reset": true, 00:10:34.494 "nvme_admin": false, 00:10:34.494 "nvme_io": false, 00:10:34.494 "nvme_io_md": false, 00:10:34.494 "write_zeroes": true, 00:10:34.494 "zcopy": false, 00:10:34.494 "get_zone_info": false, 00:10:34.494 "zone_management": false, 00:10:34.494 "zone_append": false, 00:10:34.494 "compare": false, 00:10:34.494 "compare_and_write": false, 00:10:34.494 "abort": false, 00:10:34.494 "seek_hole": false, 00:10:34.494 "seek_data": false, 00:10:34.494 "copy": false, 00:10:34.494 "nvme_iov_md": false 00:10:34.494 }, 00:10:34.494 "memory_domains": [ 00:10:34.494 { 00:10:34.494 "dma_device_id": "system", 00:10:34.494 "dma_device_type": 1 00:10:34.494 }, 00:10:34.494 { 00:10:34.494 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.494 "dma_device_type": 2 00:10:34.494 }, 00:10:34.494 { 00:10:34.494 "dma_device_id": "system", 00:10:34.494 "dma_device_type": 1 00:10:34.494 }, 00:10:34.494 { 00:10:34.494 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.494 "dma_device_type": 2 00:10:34.494 }, 00:10:34.494 { 00:10:34.494 "dma_device_id": "system", 00:10:34.494 "dma_device_type": 1 00:10:34.494 }, 00:10:34.494 { 00:10:34.494 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:34.494 "dma_device_type": 2 00:10:34.494 }, 00:10:34.494 { 00:10:34.494 "dma_device_id": "system", 00:10:34.494 "dma_device_type": 1 00:10:34.494 }, 00:10:34.494 { 00:10:34.494 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.494 "dma_device_type": 2 00:10:34.494 } 00:10:34.494 ], 00:10:34.494 "driver_specific": { 00:10:34.494 "raid": { 00:10:34.494 "uuid": "44eeec66-2753-4564-ad72-f5aca6e28939", 00:10:34.494 "strip_size_kb": 64, 00:10:34.494 "state": "online", 00:10:34.494 "raid_level": "concat", 00:10:34.494 "superblock": false, 00:10:34.494 "num_base_bdevs": 4, 00:10:34.494 "num_base_bdevs_discovered": 4, 00:10:34.494 "num_base_bdevs_operational": 4, 00:10:34.494 "base_bdevs_list": [ 00:10:34.494 { 00:10:34.494 "name": "BaseBdev1", 00:10:34.494 "uuid": "f8765506-60bf-4040-9feb-8c6b0e21d39e", 00:10:34.494 "is_configured": true, 00:10:34.494 "data_offset": 0, 00:10:34.494 "data_size": 65536 00:10:34.494 }, 00:10:34.494 { 00:10:34.494 "name": "BaseBdev2", 00:10:34.494 "uuid": "1b0437ac-373b-45ce-83fd-1cb088f39870", 00:10:34.494 "is_configured": true, 00:10:34.494 "data_offset": 0, 00:10:34.494 "data_size": 65536 00:10:34.494 }, 00:10:34.494 { 00:10:34.494 "name": "BaseBdev3", 00:10:34.494 "uuid": "2378eb23-4c96-4146-8c56-55e38752b990", 00:10:34.494 "is_configured": true, 00:10:34.494 "data_offset": 0, 00:10:34.494 "data_size": 65536 00:10:34.494 }, 00:10:34.494 { 00:10:34.494 "name": "BaseBdev4", 00:10:34.494 "uuid": "8f450ea9-34a8-4bc5-a623-debfb34920d3", 00:10:34.494 "is_configured": true, 00:10:34.494 "data_offset": 0, 00:10:34.494 "data_size": 65536 00:10:34.494 } 00:10:34.494 ] 00:10:34.494 } 00:10:34.494 } 00:10:34.494 }' 00:10:34.494 04:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:34.494 04:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:34.494 BaseBdev2 
00:10:34.494 BaseBdev3 00:10:34.494 BaseBdev4' 00:10:34.494 04:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.494 04:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:34.494 04:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:34.494 04:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:34.494 04:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.494 04:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.494 04:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.495 04:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.495 04:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:34.495 04:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:34.495 04:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:34.495 04:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.495 04:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:34.495 04:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.495 04:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.495 04:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.495 04:08:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:34.495 04:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:34.495 04:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:34.495 04:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.495 04:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:34.495 04:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.495 04:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.495 04:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.495 04:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:34.495 04:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:34.495 04:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:34.495 04:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:34.495 04:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.495 04:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.495 04:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.755 04:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.755 04:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:34.755 04:08:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:34.755 04:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:34.755 04:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.755 04:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.755 [2024-11-21 04:08:34.482663] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:34.755 [2024-11-21 04:08:34.482699] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:34.755 [2024-11-21 04:08:34.482756] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:34.755 04:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.755 04:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:34.755 04:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:34.755 04:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:34.755 04:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:34.755 04:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:34.755 04:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:10:34.755 04:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:34.755 04:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:34.755 04:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:34.755 04:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:34.755 04:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:34.755 04:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.755 04:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.755 04:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.755 04:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.755 04:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.755 04:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.755 04:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.755 04:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:34.755 04:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.755 04:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.755 "name": "Existed_Raid", 00:10:34.755 "uuid": "44eeec66-2753-4564-ad72-f5aca6e28939", 00:10:34.755 "strip_size_kb": 64, 00:10:34.755 "state": "offline", 00:10:34.755 "raid_level": "concat", 00:10:34.755 "superblock": false, 00:10:34.755 "num_base_bdevs": 4, 00:10:34.755 "num_base_bdevs_discovered": 3, 00:10:34.755 "num_base_bdevs_operational": 3, 00:10:34.755 "base_bdevs_list": [ 00:10:34.755 { 00:10:34.755 "name": null, 00:10:34.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.755 "is_configured": false, 00:10:34.755 "data_offset": 0, 00:10:34.755 "data_size": 65536 00:10:34.755 }, 00:10:34.755 { 00:10:34.755 "name": "BaseBdev2", 00:10:34.755 "uuid": "1b0437ac-373b-45ce-83fd-1cb088f39870", 00:10:34.755 "is_configured": 
true, 00:10:34.755 "data_offset": 0, 00:10:34.755 "data_size": 65536 00:10:34.755 }, 00:10:34.755 { 00:10:34.755 "name": "BaseBdev3", 00:10:34.755 "uuid": "2378eb23-4c96-4146-8c56-55e38752b990", 00:10:34.755 "is_configured": true, 00:10:34.755 "data_offset": 0, 00:10:34.755 "data_size": 65536 00:10:34.755 }, 00:10:34.755 { 00:10:34.755 "name": "BaseBdev4", 00:10:34.755 "uuid": "8f450ea9-34a8-4bc5-a623-debfb34920d3", 00:10:34.755 "is_configured": true, 00:10:34.755 "data_offset": 0, 00:10:34.755 "data_size": 65536 00:10:34.755 } 00:10:34.755 ] 00:10:34.755 }' 00:10:34.755 04:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.755 04:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.015 04:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:35.015 04:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:35.015 04:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.015 04:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:35.015 04:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.015 04:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.015 04:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.015 04:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:35.015 04:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:35.015 04:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:35.015 04:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:35.015 04:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.015 [2024-11-21 04:08:34.950860] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:35.015 04:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.015 04:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:35.015 04:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:35.015 04:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.015 04:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:35.015 04:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.015 04:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.275 04:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.275 04:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:35.275 04:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:35.275 04:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:35.275 04:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.275 04:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.275 [2024-11-21 04:08:35.027571] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:35.275 04:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.275 04:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:35.275 04:08:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:35.275 04:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.275 04:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:35.275 04:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.275 04:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.275 04:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.275 04:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:35.275 04:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:35.275 04:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:35.275 04:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.275 04:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.275 [2024-11-21 04:08:35.108472] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:35.275 [2024-11-21 04:08:35.108571] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:10:35.275 04:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.275 04:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:35.275 04:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:35.275 04:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:35.275 04:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:35.275 04:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.275 04:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.275 04:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.275 04:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:35.275 04:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:35.275 04:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:35.275 04:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:35.275 04:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:35.275 04:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:35.275 04:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.275 04:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.275 BaseBdev2 00:10:35.275 04:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.275 04:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:35.275 04:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:35.275 04:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:35.275 04:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:35.275 04:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:35.275 04:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:10:35.275 04:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:35.275 04:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.275 04:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.275 04:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.275 04:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:35.275 04:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.275 04:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.275 [ 00:10:35.275 { 00:10:35.275 "name": "BaseBdev2", 00:10:35.275 "aliases": [ 00:10:35.275 "b3965eb5-c821-42f1-a63e-0b04a7e45821" 00:10:35.276 ], 00:10:35.276 "product_name": "Malloc disk", 00:10:35.276 "block_size": 512, 00:10:35.276 "num_blocks": 65536, 00:10:35.276 "uuid": "b3965eb5-c821-42f1-a63e-0b04a7e45821", 00:10:35.276 "assigned_rate_limits": { 00:10:35.276 "rw_ios_per_sec": 0, 00:10:35.276 "rw_mbytes_per_sec": 0, 00:10:35.276 "r_mbytes_per_sec": 0, 00:10:35.276 "w_mbytes_per_sec": 0 00:10:35.276 }, 00:10:35.276 "claimed": false, 00:10:35.276 "zoned": false, 00:10:35.276 "supported_io_types": { 00:10:35.276 "read": true, 00:10:35.276 "write": true, 00:10:35.276 "unmap": true, 00:10:35.276 "flush": true, 00:10:35.276 "reset": true, 00:10:35.276 "nvme_admin": false, 00:10:35.276 "nvme_io": false, 00:10:35.276 "nvme_io_md": false, 00:10:35.276 "write_zeroes": true, 00:10:35.276 "zcopy": true, 00:10:35.276 "get_zone_info": false, 00:10:35.276 "zone_management": false, 00:10:35.276 "zone_append": false, 00:10:35.276 "compare": false, 00:10:35.276 "compare_and_write": false, 00:10:35.276 "abort": true, 00:10:35.276 "seek_hole": false, 00:10:35.276 "seek_data": false, 
00:10:35.276 "copy": true, 00:10:35.276 "nvme_iov_md": false 00:10:35.276 }, 00:10:35.276 "memory_domains": [ 00:10:35.276 { 00:10:35.276 "dma_device_id": "system", 00:10:35.276 "dma_device_type": 1 00:10:35.276 }, 00:10:35.276 { 00:10:35.276 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.276 "dma_device_type": 2 00:10:35.276 } 00:10:35.276 ], 00:10:35.276 "driver_specific": {} 00:10:35.276 } 00:10:35.276 ] 00:10:35.276 04:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.276 04:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:35.276 04:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:35.276 04:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:35.276 04:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:35.276 04:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.276 04:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.536 BaseBdev3 00:10:35.536 04:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.536 04:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:35.536 04:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:35.536 04:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:35.536 04:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:35.536 04:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:35.536 04:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:35.536 
04:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:35.536 04:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.536 04:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.536 04:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.536 04:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:35.536 04:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.536 04:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.536 [ 00:10:35.536 { 00:10:35.536 "name": "BaseBdev3", 00:10:35.536 "aliases": [ 00:10:35.536 "d22bccc3-e3e7-46ab-831e-2953119e552c" 00:10:35.536 ], 00:10:35.536 "product_name": "Malloc disk", 00:10:35.536 "block_size": 512, 00:10:35.536 "num_blocks": 65536, 00:10:35.536 "uuid": "d22bccc3-e3e7-46ab-831e-2953119e552c", 00:10:35.537 "assigned_rate_limits": { 00:10:35.537 "rw_ios_per_sec": 0, 00:10:35.537 "rw_mbytes_per_sec": 0, 00:10:35.537 "r_mbytes_per_sec": 0, 00:10:35.537 "w_mbytes_per_sec": 0 00:10:35.537 }, 00:10:35.537 "claimed": false, 00:10:35.537 "zoned": false, 00:10:35.537 "supported_io_types": { 00:10:35.537 "read": true, 00:10:35.537 "write": true, 00:10:35.537 "unmap": true, 00:10:35.537 "flush": true, 00:10:35.537 "reset": true, 00:10:35.537 "nvme_admin": false, 00:10:35.537 "nvme_io": false, 00:10:35.537 "nvme_io_md": false, 00:10:35.537 "write_zeroes": true, 00:10:35.537 "zcopy": true, 00:10:35.537 "get_zone_info": false, 00:10:35.537 "zone_management": false, 00:10:35.537 "zone_append": false, 00:10:35.537 "compare": false, 00:10:35.537 "compare_and_write": false, 00:10:35.537 "abort": true, 00:10:35.537 "seek_hole": false, 00:10:35.537 "seek_data": false, 00:10:35.537 
"copy": true, 00:10:35.537 "nvme_iov_md": false 00:10:35.537 }, 00:10:35.537 "memory_domains": [ 00:10:35.537 { 00:10:35.537 "dma_device_id": "system", 00:10:35.537 "dma_device_type": 1 00:10:35.537 }, 00:10:35.537 { 00:10:35.537 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.537 "dma_device_type": 2 00:10:35.537 } 00:10:35.537 ], 00:10:35.537 "driver_specific": {} 00:10:35.537 } 00:10:35.537 ] 00:10:35.537 04:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.537 04:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:35.537 04:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:35.537 04:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:35.537 04:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:35.537 04:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.537 04:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.537 BaseBdev4 00:10:35.537 04:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.537 04:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:35.537 04:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:35.537 04:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:35.537 04:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:35.537 04:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:35.537 04:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:35.537 04:08:35 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:35.537 04:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.537 04:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.537 04:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.537 04:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:35.537 04:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.537 04:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.537 [ 00:10:35.537 { 00:10:35.537 "name": "BaseBdev4", 00:10:35.537 "aliases": [ 00:10:35.537 "8ec74ba5-a331-4a29-9a5a-19db7ed578ea" 00:10:35.537 ], 00:10:35.537 "product_name": "Malloc disk", 00:10:35.537 "block_size": 512, 00:10:35.537 "num_blocks": 65536, 00:10:35.537 "uuid": "8ec74ba5-a331-4a29-9a5a-19db7ed578ea", 00:10:35.537 "assigned_rate_limits": { 00:10:35.537 "rw_ios_per_sec": 0, 00:10:35.537 "rw_mbytes_per_sec": 0, 00:10:35.537 "r_mbytes_per_sec": 0, 00:10:35.537 "w_mbytes_per_sec": 0 00:10:35.537 }, 00:10:35.537 "claimed": false, 00:10:35.537 "zoned": false, 00:10:35.537 "supported_io_types": { 00:10:35.537 "read": true, 00:10:35.537 "write": true, 00:10:35.537 "unmap": true, 00:10:35.537 "flush": true, 00:10:35.537 "reset": true, 00:10:35.537 "nvme_admin": false, 00:10:35.537 "nvme_io": false, 00:10:35.537 "nvme_io_md": false, 00:10:35.537 "write_zeroes": true, 00:10:35.537 "zcopy": true, 00:10:35.537 "get_zone_info": false, 00:10:35.537 "zone_management": false, 00:10:35.537 "zone_append": false, 00:10:35.537 "compare": false, 00:10:35.537 "compare_and_write": false, 00:10:35.537 "abort": true, 00:10:35.537 "seek_hole": false, 00:10:35.537 "seek_data": false, 00:10:35.537 "copy": true, 
00:10:35.537 "nvme_iov_md": false 00:10:35.537 }, 00:10:35.537 "memory_domains": [ 00:10:35.537 { 00:10:35.537 "dma_device_id": "system", 00:10:35.537 "dma_device_type": 1 00:10:35.537 }, 00:10:35.537 { 00:10:35.537 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.537 "dma_device_type": 2 00:10:35.537 } 00:10:35.537 ], 00:10:35.537 "driver_specific": {} 00:10:35.537 } 00:10:35.537 ] 00:10:35.537 04:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.537 04:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:35.537 04:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:35.537 04:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:35.537 04:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:35.537 04:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.537 04:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.537 [2024-11-21 04:08:35.353354] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:35.537 [2024-11-21 04:08:35.353475] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:35.537 [2024-11-21 04:08:35.353536] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:35.537 [2024-11-21 04:08:35.355600] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:35.537 [2024-11-21 04:08:35.355686] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:35.537 04:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.537 04:08:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:35.537 04:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:35.537 04:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:35.537 04:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:35.537 04:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:35.537 04:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:35.537 04:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.537 04:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.537 04:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.537 04:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.537 04:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.537 04:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.537 04:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.537 04:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:35.537 04:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.537 04:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.537 "name": "Existed_Raid", 00:10:35.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.537 "strip_size_kb": 64, 00:10:35.537 "state": "configuring", 00:10:35.537 
"raid_level": "concat", 00:10:35.537 "superblock": false, 00:10:35.537 "num_base_bdevs": 4, 00:10:35.537 "num_base_bdevs_discovered": 3, 00:10:35.537 "num_base_bdevs_operational": 4, 00:10:35.537 "base_bdevs_list": [ 00:10:35.538 { 00:10:35.538 "name": "BaseBdev1", 00:10:35.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.538 "is_configured": false, 00:10:35.538 "data_offset": 0, 00:10:35.538 "data_size": 0 00:10:35.538 }, 00:10:35.538 { 00:10:35.538 "name": "BaseBdev2", 00:10:35.538 "uuid": "b3965eb5-c821-42f1-a63e-0b04a7e45821", 00:10:35.538 "is_configured": true, 00:10:35.538 "data_offset": 0, 00:10:35.538 "data_size": 65536 00:10:35.538 }, 00:10:35.538 { 00:10:35.538 "name": "BaseBdev3", 00:10:35.538 "uuid": "d22bccc3-e3e7-46ab-831e-2953119e552c", 00:10:35.538 "is_configured": true, 00:10:35.538 "data_offset": 0, 00:10:35.538 "data_size": 65536 00:10:35.538 }, 00:10:35.538 { 00:10:35.538 "name": "BaseBdev4", 00:10:35.538 "uuid": "8ec74ba5-a331-4a29-9a5a-19db7ed578ea", 00:10:35.538 "is_configured": true, 00:10:35.538 "data_offset": 0, 00:10:35.538 "data_size": 65536 00:10:35.538 } 00:10:35.538 ] 00:10:35.538 }' 00:10:35.538 04:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.538 04:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.107 04:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:36.107 04:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.107 04:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.107 [2024-11-21 04:08:35.836535] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:36.107 04:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.107 04:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:36.107 04:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.107 04:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:36.107 04:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:36.107 04:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:36.107 04:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:36.107 04:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.107 04:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.107 04:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.107 04:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.107 04:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.107 04:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.107 04:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.107 04:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.107 04:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.107 04:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.107 "name": "Existed_Raid", 00:10:36.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.107 "strip_size_kb": 64, 00:10:36.107 "state": "configuring", 00:10:36.107 "raid_level": "concat", 00:10:36.107 "superblock": false, 
00:10:36.107 "num_base_bdevs": 4, 00:10:36.107 "num_base_bdevs_discovered": 2, 00:10:36.107 "num_base_bdevs_operational": 4, 00:10:36.107 "base_bdevs_list": [ 00:10:36.107 { 00:10:36.107 "name": "BaseBdev1", 00:10:36.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.107 "is_configured": false, 00:10:36.107 "data_offset": 0, 00:10:36.107 "data_size": 0 00:10:36.107 }, 00:10:36.107 { 00:10:36.107 "name": null, 00:10:36.107 "uuid": "b3965eb5-c821-42f1-a63e-0b04a7e45821", 00:10:36.107 "is_configured": false, 00:10:36.107 "data_offset": 0, 00:10:36.107 "data_size": 65536 00:10:36.107 }, 00:10:36.107 { 00:10:36.107 "name": "BaseBdev3", 00:10:36.107 "uuid": "d22bccc3-e3e7-46ab-831e-2953119e552c", 00:10:36.107 "is_configured": true, 00:10:36.107 "data_offset": 0, 00:10:36.107 "data_size": 65536 00:10:36.107 }, 00:10:36.107 { 00:10:36.107 "name": "BaseBdev4", 00:10:36.107 "uuid": "8ec74ba5-a331-4a29-9a5a-19db7ed578ea", 00:10:36.107 "is_configured": true, 00:10:36.107 "data_offset": 0, 00:10:36.107 "data_size": 65536 00:10:36.107 } 00:10:36.107 ] 00:10:36.107 }' 00:10:36.107 04:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.107 04:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.367 04:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.367 04:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.367 04:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.367 04:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:36.367 04:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.367 04:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:36.367 04:08:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:36.367 04:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.367 04:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.626 [2024-11-21 04:08:36.352571] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:36.626 BaseBdev1 00:10:36.626 04:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.626 04:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:36.626 04:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:36.626 04:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:36.626 04:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:36.626 04:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:36.626 04:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:36.626 04:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:36.626 04:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.626 04:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.626 04:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.626 04:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:36.626 04:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.626 04:08:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:36.626 [ 00:10:36.626 { 00:10:36.626 "name": "BaseBdev1", 00:10:36.626 "aliases": [ 00:10:36.626 "7b1bb5c2-7a36-461b-a539-0796f055dc2d" 00:10:36.626 ], 00:10:36.626 "product_name": "Malloc disk", 00:10:36.626 "block_size": 512, 00:10:36.626 "num_blocks": 65536, 00:10:36.626 "uuid": "7b1bb5c2-7a36-461b-a539-0796f055dc2d", 00:10:36.626 "assigned_rate_limits": { 00:10:36.626 "rw_ios_per_sec": 0, 00:10:36.626 "rw_mbytes_per_sec": 0, 00:10:36.626 "r_mbytes_per_sec": 0, 00:10:36.626 "w_mbytes_per_sec": 0 00:10:36.626 }, 00:10:36.626 "claimed": true, 00:10:36.626 "claim_type": "exclusive_write", 00:10:36.626 "zoned": false, 00:10:36.627 "supported_io_types": { 00:10:36.627 "read": true, 00:10:36.627 "write": true, 00:10:36.627 "unmap": true, 00:10:36.627 "flush": true, 00:10:36.627 "reset": true, 00:10:36.627 "nvme_admin": false, 00:10:36.627 "nvme_io": false, 00:10:36.627 "nvme_io_md": false, 00:10:36.627 "write_zeroes": true, 00:10:36.627 "zcopy": true, 00:10:36.627 "get_zone_info": false, 00:10:36.627 "zone_management": false, 00:10:36.627 "zone_append": false, 00:10:36.627 "compare": false, 00:10:36.627 "compare_and_write": false, 00:10:36.627 "abort": true, 00:10:36.627 "seek_hole": false, 00:10:36.627 "seek_data": false, 00:10:36.627 "copy": true, 00:10:36.627 "nvme_iov_md": false 00:10:36.627 }, 00:10:36.627 "memory_domains": [ 00:10:36.627 { 00:10:36.627 "dma_device_id": "system", 00:10:36.627 "dma_device_type": 1 00:10:36.627 }, 00:10:36.627 { 00:10:36.627 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.627 "dma_device_type": 2 00:10:36.627 } 00:10:36.627 ], 00:10:36.627 "driver_specific": {} 00:10:36.627 } 00:10:36.627 ] 00:10:36.627 04:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.627 04:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:36.627 04:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:36.627 04:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.627 04:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:36.627 04:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:36.627 04:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:36.627 04:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:36.627 04:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.627 04:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.627 04:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.627 04:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.627 04:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.627 04:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.627 04:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.627 04:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.627 04:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.627 04:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.627 "name": "Existed_Raid", 00:10:36.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.627 "strip_size_kb": 64, 00:10:36.627 "state": "configuring", 00:10:36.627 "raid_level": "concat", 00:10:36.627 "superblock": false, 
00:10:36.627 "num_base_bdevs": 4, 00:10:36.627 "num_base_bdevs_discovered": 3, 00:10:36.627 "num_base_bdevs_operational": 4, 00:10:36.627 "base_bdevs_list": [ 00:10:36.627 { 00:10:36.627 "name": "BaseBdev1", 00:10:36.627 "uuid": "7b1bb5c2-7a36-461b-a539-0796f055dc2d", 00:10:36.627 "is_configured": true, 00:10:36.627 "data_offset": 0, 00:10:36.627 "data_size": 65536 00:10:36.627 }, 00:10:36.627 { 00:10:36.627 "name": null, 00:10:36.627 "uuid": "b3965eb5-c821-42f1-a63e-0b04a7e45821", 00:10:36.627 "is_configured": false, 00:10:36.627 "data_offset": 0, 00:10:36.627 "data_size": 65536 00:10:36.627 }, 00:10:36.627 { 00:10:36.627 "name": "BaseBdev3", 00:10:36.627 "uuid": "d22bccc3-e3e7-46ab-831e-2953119e552c", 00:10:36.627 "is_configured": true, 00:10:36.627 "data_offset": 0, 00:10:36.627 "data_size": 65536 00:10:36.627 }, 00:10:36.627 { 00:10:36.627 "name": "BaseBdev4", 00:10:36.627 "uuid": "8ec74ba5-a331-4a29-9a5a-19db7ed578ea", 00:10:36.627 "is_configured": true, 00:10:36.627 "data_offset": 0, 00:10:36.627 "data_size": 65536 00:10:36.627 } 00:10:36.627 ] 00:10:36.627 }' 00:10:36.627 04:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.627 04:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.899 04:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:36.899 04:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.899 04:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.899 04:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.899 04:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.899 04:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:36.899 04:08:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:36.899 04:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.899 04:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.174 [2024-11-21 04:08:36.863780] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:37.174 04:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.174 04:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:37.174 04:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:37.174 04:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:37.174 04:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:37.174 04:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:37.174 04:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:37.174 04:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.174 04:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.174 04:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.174 04:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.174 04:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.174 04:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.174 04:08:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:37.174 04:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:37.174 04:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.174 04:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.174 "name": "Existed_Raid", 00:10:37.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.174 "strip_size_kb": 64, 00:10:37.174 "state": "configuring", 00:10:37.174 "raid_level": "concat", 00:10:37.174 "superblock": false, 00:10:37.174 "num_base_bdevs": 4, 00:10:37.174 "num_base_bdevs_discovered": 2, 00:10:37.174 "num_base_bdevs_operational": 4, 00:10:37.174 "base_bdevs_list": [ 00:10:37.174 { 00:10:37.174 "name": "BaseBdev1", 00:10:37.174 "uuid": "7b1bb5c2-7a36-461b-a539-0796f055dc2d", 00:10:37.174 "is_configured": true, 00:10:37.174 "data_offset": 0, 00:10:37.174 "data_size": 65536 00:10:37.174 }, 00:10:37.174 { 00:10:37.174 "name": null, 00:10:37.174 "uuid": "b3965eb5-c821-42f1-a63e-0b04a7e45821", 00:10:37.174 "is_configured": false, 00:10:37.174 "data_offset": 0, 00:10:37.174 "data_size": 65536 00:10:37.174 }, 00:10:37.174 { 00:10:37.174 "name": null, 00:10:37.174 "uuid": "d22bccc3-e3e7-46ab-831e-2953119e552c", 00:10:37.174 "is_configured": false, 00:10:37.174 "data_offset": 0, 00:10:37.174 "data_size": 65536 00:10:37.174 }, 00:10:37.174 { 00:10:37.174 "name": "BaseBdev4", 00:10:37.174 "uuid": "8ec74ba5-a331-4a29-9a5a-19db7ed578ea", 00:10:37.174 "is_configured": true, 00:10:37.174 "data_offset": 0, 00:10:37.174 "data_size": 65536 00:10:37.174 } 00:10:37.174 ] 00:10:37.174 }' 00:10:37.174 04:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.174 04:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.434 04:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:10:37.434 04:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.434 04:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.434 04:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.434 04:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.434 04:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:37.434 04:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:37.434 04:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.434 04:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.434 [2024-11-21 04:08:37.366956] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:37.434 04:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.434 04:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:37.434 04:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:37.434 04:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:37.434 04:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:37.434 04:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:37.434 04:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:37.434 04:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:37.434 04:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.434 04:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.434 04:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.434 04:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.434 04:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:37.434 04:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.434 04:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.434 04:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.693 04:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.693 "name": "Existed_Raid", 00:10:37.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.693 "strip_size_kb": 64, 00:10:37.693 "state": "configuring", 00:10:37.693 "raid_level": "concat", 00:10:37.693 "superblock": false, 00:10:37.693 "num_base_bdevs": 4, 00:10:37.693 "num_base_bdevs_discovered": 3, 00:10:37.693 "num_base_bdevs_operational": 4, 00:10:37.693 "base_bdevs_list": [ 00:10:37.693 { 00:10:37.693 "name": "BaseBdev1", 00:10:37.693 "uuid": "7b1bb5c2-7a36-461b-a539-0796f055dc2d", 00:10:37.693 "is_configured": true, 00:10:37.693 "data_offset": 0, 00:10:37.693 "data_size": 65536 00:10:37.693 }, 00:10:37.693 { 00:10:37.693 "name": null, 00:10:37.693 "uuid": "b3965eb5-c821-42f1-a63e-0b04a7e45821", 00:10:37.693 "is_configured": false, 00:10:37.693 "data_offset": 0, 00:10:37.693 "data_size": 65536 00:10:37.693 }, 00:10:37.693 { 00:10:37.693 "name": "BaseBdev3", 00:10:37.693 "uuid": "d22bccc3-e3e7-46ab-831e-2953119e552c", 00:10:37.693 "is_configured": 
true, 00:10:37.693 "data_offset": 0, 00:10:37.693 "data_size": 65536 00:10:37.693 }, 00:10:37.693 { 00:10:37.693 "name": "BaseBdev4", 00:10:37.693 "uuid": "8ec74ba5-a331-4a29-9a5a-19db7ed578ea", 00:10:37.693 "is_configured": true, 00:10:37.693 "data_offset": 0, 00:10:37.693 "data_size": 65536 00:10:37.693 } 00:10:37.693 ] 00:10:37.693 }' 00:10:37.693 04:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.693 04:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.953 04:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.953 04:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:37.953 04:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.953 04:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.953 04:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.953 04:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:37.953 04:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:37.953 04:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.953 04:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.953 [2024-11-21 04:08:37.830291] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:37.953 04:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.953 04:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:37.953 04:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:10:37.953 04:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:37.953 04:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:37.953 04:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:37.953 04:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:37.953 04:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.953 04:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.953 04:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.953 04:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.953 04:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.953 04:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.954 04:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.954 04:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:37.954 04:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.954 04:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.954 "name": "Existed_Raid", 00:10:37.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.954 "strip_size_kb": 64, 00:10:37.954 "state": "configuring", 00:10:37.954 "raid_level": "concat", 00:10:37.954 "superblock": false, 00:10:37.954 "num_base_bdevs": 4, 00:10:37.954 "num_base_bdevs_discovered": 2, 00:10:37.954 "num_base_bdevs_operational": 4, 00:10:37.954 
"base_bdevs_list": [ 00:10:37.954 { 00:10:37.954 "name": null, 00:10:37.954 "uuid": "7b1bb5c2-7a36-461b-a539-0796f055dc2d", 00:10:37.954 "is_configured": false, 00:10:37.954 "data_offset": 0, 00:10:37.954 "data_size": 65536 00:10:37.954 }, 00:10:37.954 { 00:10:37.954 "name": null, 00:10:37.954 "uuid": "b3965eb5-c821-42f1-a63e-0b04a7e45821", 00:10:37.954 "is_configured": false, 00:10:37.954 "data_offset": 0, 00:10:37.954 "data_size": 65536 00:10:37.954 }, 00:10:37.954 { 00:10:37.954 "name": "BaseBdev3", 00:10:37.954 "uuid": "d22bccc3-e3e7-46ab-831e-2953119e552c", 00:10:37.954 "is_configured": true, 00:10:37.954 "data_offset": 0, 00:10:37.954 "data_size": 65536 00:10:37.954 }, 00:10:37.954 { 00:10:37.954 "name": "BaseBdev4", 00:10:37.954 "uuid": "8ec74ba5-a331-4a29-9a5a-19db7ed578ea", 00:10:37.954 "is_configured": true, 00:10:37.954 "data_offset": 0, 00:10:37.954 "data_size": 65536 00:10:37.954 } 00:10:37.954 ] 00:10:37.954 }' 00:10:37.954 04:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.954 04:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.522 04:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.522 04:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:38.522 04:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.522 04:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.522 04:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.523 04:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:38.523 04:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:38.523 04:08:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.523 04:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.523 [2024-11-21 04:08:38.337380] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:38.523 04:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.523 04:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:38.523 04:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.523 04:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:38.523 04:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:38.523 04:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:38.523 04:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:38.523 04:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.523 04:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.523 04:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.523 04:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.523 04:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.523 04:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.523 04:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.523 04:08:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.523 04:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.523 04:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.523 "name": "Existed_Raid", 00:10:38.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.523 "strip_size_kb": 64, 00:10:38.523 "state": "configuring", 00:10:38.523 "raid_level": "concat", 00:10:38.523 "superblock": false, 00:10:38.523 "num_base_bdevs": 4, 00:10:38.523 "num_base_bdevs_discovered": 3, 00:10:38.523 "num_base_bdevs_operational": 4, 00:10:38.523 "base_bdevs_list": [ 00:10:38.523 { 00:10:38.523 "name": null, 00:10:38.523 "uuid": "7b1bb5c2-7a36-461b-a539-0796f055dc2d", 00:10:38.523 "is_configured": false, 00:10:38.523 "data_offset": 0, 00:10:38.523 "data_size": 65536 00:10:38.523 }, 00:10:38.523 { 00:10:38.523 "name": "BaseBdev2", 00:10:38.523 "uuid": "b3965eb5-c821-42f1-a63e-0b04a7e45821", 00:10:38.523 "is_configured": true, 00:10:38.523 "data_offset": 0, 00:10:38.523 "data_size": 65536 00:10:38.523 }, 00:10:38.523 { 00:10:38.523 "name": "BaseBdev3", 00:10:38.523 "uuid": "d22bccc3-e3e7-46ab-831e-2953119e552c", 00:10:38.523 "is_configured": true, 00:10:38.523 "data_offset": 0, 00:10:38.523 "data_size": 65536 00:10:38.523 }, 00:10:38.523 { 00:10:38.523 "name": "BaseBdev4", 00:10:38.523 "uuid": "8ec74ba5-a331-4a29-9a5a-19db7ed578ea", 00:10:38.523 "is_configured": true, 00:10:38.523 "data_offset": 0, 00:10:38.523 "data_size": 65536 00:10:38.523 } 00:10:38.523 ] 00:10:38.523 }' 00:10:38.523 04:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.523 04:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.093 04:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:39.093 04:08:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.093 04:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.093 04:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.093 04:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.093 04:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:39.093 04:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.093 04:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:39.093 04:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.093 04:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.093 04:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.093 04:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 7b1bb5c2-7a36-461b-a539-0796f055dc2d 00:10:39.093 04:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.093 04:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.093 [2024-11-21 04:08:38.937569] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:39.093 [2024-11-21 04:08:38.937729] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:10:39.093 [2024-11-21 04:08:38.937754] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:39.093 [2024-11-21 04:08:38.938112] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:10:39.093 
[2024-11-21 04:08:38.938298] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:10:39.093 [2024-11-21 04:08:38.938341] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:10:39.093 [2024-11-21 04:08:38.938622] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:39.093 NewBaseBdev 00:10:39.093 04:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.093 04:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:39.093 04:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:39.093 04:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:39.093 04:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:39.093 04:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:39.093 04:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:39.093 04:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:39.093 04:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.093 04:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.093 04:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.093 04:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:39.093 04:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.093 04:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:10:39.093 [ 00:10:39.093 { 00:10:39.093 "name": "NewBaseBdev", 00:10:39.093 "aliases": [ 00:10:39.093 "7b1bb5c2-7a36-461b-a539-0796f055dc2d" 00:10:39.093 ], 00:10:39.093 "product_name": "Malloc disk", 00:10:39.093 "block_size": 512, 00:10:39.093 "num_blocks": 65536, 00:10:39.093 "uuid": "7b1bb5c2-7a36-461b-a539-0796f055dc2d", 00:10:39.093 "assigned_rate_limits": { 00:10:39.093 "rw_ios_per_sec": 0, 00:10:39.093 "rw_mbytes_per_sec": 0, 00:10:39.093 "r_mbytes_per_sec": 0, 00:10:39.093 "w_mbytes_per_sec": 0 00:10:39.093 }, 00:10:39.093 "claimed": true, 00:10:39.093 "claim_type": "exclusive_write", 00:10:39.093 "zoned": false, 00:10:39.093 "supported_io_types": { 00:10:39.093 "read": true, 00:10:39.093 "write": true, 00:10:39.093 "unmap": true, 00:10:39.093 "flush": true, 00:10:39.093 "reset": true, 00:10:39.093 "nvme_admin": false, 00:10:39.093 "nvme_io": false, 00:10:39.093 "nvme_io_md": false, 00:10:39.093 "write_zeroes": true, 00:10:39.093 "zcopy": true, 00:10:39.093 "get_zone_info": false, 00:10:39.093 "zone_management": false, 00:10:39.093 "zone_append": false, 00:10:39.093 "compare": false, 00:10:39.093 "compare_and_write": false, 00:10:39.093 "abort": true, 00:10:39.093 "seek_hole": false, 00:10:39.093 "seek_data": false, 00:10:39.093 "copy": true, 00:10:39.093 "nvme_iov_md": false 00:10:39.093 }, 00:10:39.093 "memory_domains": [ 00:10:39.093 { 00:10:39.093 "dma_device_id": "system", 00:10:39.093 "dma_device_type": 1 00:10:39.093 }, 00:10:39.093 { 00:10:39.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.093 "dma_device_type": 2 00:10:39.093 } 00:10:39.093 ], 00:10:39.093 "driver_specific": {} 00:10:39.093 } 00:10:39.093 ] 00:10:39.093 04:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.093 04:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:39.093 04:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid 
online concat 64 4 00:10:39.093 04:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.093 04:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:39.093 04:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:39.093 04:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:39.093 04:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:39.093 04:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.093 04:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.093 04:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.093 04:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.093 04:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.093 04:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.093 04:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.093 04:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.093 04:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.093 04:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.093 "name": "Existed_Raid", 00:10:39.093 "uuid": "73c76f93-85ab-42e8-a82f-91bfae65816d", 00:10:39.093 "strip_size_kb": 64, 00:10:39.093 "state": "online", 00:10:39.093 "raid_level": "concat", 00:10:39.093 "superblock": false, 00:10:39.093 "num_base_bdevs": 4, 00:10:39.093 
"num_base_bdevs_discovered": 4, 00:10:39.093 "num_base_bdevs_operational": 4, 00:10:39.093 "base_bdevs_list": [ 00:10:39.093 { 00:10:39.093 "name": "NewBaseBdev", 00:10:39.093 "uuid": "7b1bb5c2-7a36-461b-a539-0796f055dc2d", 00:10:39.093 "is_configured": true, 00:10:39.093 "data_offset": 0, 00:10:39.093 "data_size": 65536 00:10:39.093 }, 00:10:39.093 { 00:10:39.093 "name": "BaseBdev2", 00:10:39.093 "uuid": "b3965eb5-c821-42f1-a63e-0b04a7e45821", 00:10:39.093 "is_configured": true, 00:10:39.093 "data_offset": 0, 00:10:39.093 "data_size": 65536 00:10:39.093 }, 00:10:39.093 { 00:10:39.093 "name": "BaseBdev3", 00:10:39.093 "uuid": "d22bccc3-e3e7-46ab-831e-2953119e552c", 00:10:39.093 "is_configured": true, 00:10:39.093 "data_offset": 0, 00:10:39.093 "data_size": 65536 00:10:39.093 }, 00:10:39.093 { 00:10:39.093 "name": "BaseBdev4", 00:10:39.093 "uuid": "8ec74ba5-a331-4a29-9a5a-19db7ed578ea", 00:10:39.093 "is_configured": true, 00:10:39.093 "data_offset": 0, 00:10:39.093 "data_size": 65536 00:10:39.093 } 00:10:39.093 ] 00:10:39.093 }' 00:10:39.093 04:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.093 04:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.670 04:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:39.670 04:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:39.670 04:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:39.670 04:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:39.670 04:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:39.670 04:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:39.670 04:08:39 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:39.670 04:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:39.670 04:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.670 04:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.670 [2024-11-21 04:08:39.461109] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:39.670 04:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.670 04:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:39.670 "name": "Existed_Raid", 00:10:39.670 "aliases": [ 00:10:39.670 "73c76f93-85ab-42e8-a82f-91bfae65816d" 00:10:39.670 ], 00:10:39.670 "product_name": "Raid Volume", 00:10:39.670 "block_size": 512, 00:10:39.670 "num_blocks": 262144, 00:10:39.670 "uuid": "73c76f93-85ab-42e8-a82f-91bfae65816d", 00:10:39.670 "assigned_rate_limits": { 00:10:39.670 "rw_ios_per_sec": 0, 00:10:39.670 "rw_mbytes_per_sec": 0, 00:10:39.670 "r_mbytes_per_sec": 0, 00:10:39.670 "w_mbytes_per_sec": 0 00:10:39.670 }, 00:10:39.670 "claimed": false, 00:10:39.670 "zoned": false, 00:10:39.670 "supported_io_types": { 00:10:39.670 "read": true, 00:10:39.670 "write": true, 00:10:39.670 "unmap": true, 00:10:39.670 "flush": true, 00:10:39.670 "reset": true, 00:10:39.670 "nvme_admin": false, 00:10:39.670 "nvme_io": false, 00:10:39.670 "nvme_io_md": false, 00:10:39.670 "write_zeroes": true, 00:10:39.670 "zcopy": false, 00:10:39.670 "get_zone_info": false, 00:10:39.670 "zone_management": false, 00:10:39.671 "zone_append": false, 00:10:39.671 "compare": false, 00:10:39.671 "compare_and_write": false, 00:10:39.671 "abort": false, 00:10:39.671 "seek_hole": false, 00:10:39.671 "seek_data": false, 00:10:39.671 "copy": false, 00:10:39.671 "nvme_iov_md": false 00:10:39.671 }, 00:10:39.671 "memory_domains": [ 
00:10:39.671 { 00:10:39.671 "dma_device_id": "system", 00:10:39.671 "dma_device_type": 1 00:10:39.671 }, 00:10:39.671 { 00:10:39.671 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.671 "dma_device_type": 2 00:10:39.671 }, 00:10:39.671 { 00:10:39.671 "dma_device_id": "system", 00:10:39.671 "dma_device_type": 1 00:10:39.671 }, 00:10:39.671 { 00:10:39.671 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.671 "dma_device_type": 2 00:10:39.671 }, 00:10:39.671 { 00:10:39.671 "dma_device_id": "system", 00:10:39.671 "dma_device_type": 1 00:10:39.671 }, 00:10:39.671 { 00:10:39.671 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.671 "dma_device_type": 2 00:10:39.671 }, 00:10:39.671 { 00:10:39.671 "dma_device_id": "system", 00:10:39.671 "dma_device_type": 1 00:10:39.671 }, 00:10:39.671 { 00:10:39.671 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.671 "dma_device_type": 2 00:10:39.671 } 00:10:39.671 ], 00:10:39.671 "driver_specific": { 00:10:39.671 "raid": { 00:10:39.671 "uuid": "73c76f93-85ab-42e8-a82f-91bfae65816d", 00:10:39.671 "strip_size_kb": 64, 00:10:39.671 "state": "online", 00:10:39.671 "raid_level": "concat", 00:10:39.671 "superblock": false, 00:10:39.671 "num_base_bdevs": 4, 00:10:39.671 "num_base_bdevs_discovered": 4, 00:10:39.671 "num_base_bdevs_operational": 4, 00:10:39.671 "base_bdevs_list": [ 00:10:39.671 { 00:10:39.671 "name": "NewBaseBdev", 00:10:39.671 "uuid": "7b1bb5c2-7a36-461b-a539-0796f055dc2d", 00:10:39.671 "is_configured": true, 00:10:39.671 "data_offset": 0, 00:10:39.671 "data_size": 65536 00:10:39.671 }, 00:10:39.671 { 00:10:39.671 "name": "BaseBdev2", 00:10:39.671 "uuid": "b3965eb5-c821-42f1-a63e-0b04a7e45821", 00:10:39.671 "is_configured": true, 00:10:39.671 "data_offset": 0, 00:10:39.671 "data_size": 65536 00:10:39.671 }, 00:10:39.671 { 00:10:39.671 "name": "BaseBdev3", 00:10:39.671 "uuid": "d22bccc3-e3e7-46ab-831e-2953119e552c", 00:10:39.671 "is_configured": true, 00:10:39.671 "data_offset": 0, 00:10:39.671 "data_size": 65536 
00:10:39.671 }, 00:10:39.671 { 00:10:39.671 "name": "BaseBdev4", 00:10:39.671 "uuid": "8ec74ba5-a331-4a29-9a5a-19db7ed578ea", 00:10:39.671 "is_configured": true, 00:10:39.671 "data_offset": 0, 00:10:39.671 "data_size": 65536 00:10:39.671 } 00:10:39.671 ] 00:10:39.671 } 00:10:39.671 } 00:10:39.671 }' 00:10:39.671 04:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:39.671 04:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:39.671 BaseBdev2 00:10:39.671 BaseBdev3 00:10:39.671 BaseBdev4' 00:10:39.671 04:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:39.671 04:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:39.671 04:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:39.671 04:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:39.671 04:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:39.671 04:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.671 04:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.671 04:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.931 04:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:39.931 04:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:39.931 04:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:39.931 
04:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:39.931 04:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:39.931 04:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.931 04:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.931 04:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.931 04:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:39.931 04:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:39.931 04:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:39.931 04:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:39.931 04:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.931 04:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.931 04:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:39.931 04:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.931 04:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:39.931 04:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:39.931 04:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:39.931 04:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 
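The trace above exercises the per-base-bdev property check in `bdev_raid.sh` (@191-193): for each name in `$base_bdev_names` it fetches `[.block_size, .md_size, .md_interleave, .dif_type]` via `bdev_get_bdevs` and compares the joined tuple against the raid bdev's own tuple. A minimal offline sketch of that loop follows, with `rpc_cmd` stubbed out so it runs without a live SPDK target (a real run would route through `scripts/rpc.py`; the values are illustrative — in the live trace the null metadata fields join to empty strings, yielding `'512   '`):

```shell
#!/usr/bin/env bash
# Stub: stands in for "scripts/rpc.py bdev_get_bdevs -b NAME | jq -r '...'".
# Returns an illustrative block_size/md_size/md_interleave/dif_type tuple.
rpc_cmd() {
  echo '512 null null null'
}

# Tuple previously captured from the raid bdev itself (cmp_raid_bdev in the log).
cmp_raid_bdev='512 null null null'
base_bdev_names='NewBaseBdev BaseBdev2 BaseBdev3 BaseBdev4'

# Compare each base bdev's tuple against the raid bdev's; any mismatch fails.
for name in $base_bdev_names; do
  cmp_base_bdev=$(rpc_cmd bdev_get_bdevs -b "$name")
  [[ "$cmp_raid_bdev" == "$cmp_base_bdev" ]] || {
    echo "property mismatch on $name" >&2
    exit 1
  }
done
echo "all base bdevs match"
```

The loop deliberately compares the whole joined string rather than individual fields, which is why the live trace's `[[ 512 == \5\1\2\ \ \ ]]` tests include the trailing (empty) metadata columns.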
00:10:39.931 04:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.931 04:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.931 04:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:39.931 04:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.931 04:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:39.931 04:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:39.931 04:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:39.931 04:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.931 04:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.931 [2024-11-21 04:08:39.792295] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:39.931 [2024-11-21 04:08:39.792340] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:39.931 [2024-11-21 04:08:39.792449] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:39.931 [2024-11-21 04:08:39.792531] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:39.931 [2024-11-21 04:08:39.792543] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:10:39.931 04:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.931 04:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 82188 00:10:39.931 04:08:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # '[' -z 82188 ']' 00:10:39.931 04:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 82188 00:10:39.931 04:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:39.931 04:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:39.931 04:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82188 00:10:39.932 04:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:39.932 04:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:39.932 04:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82188' 00:10:39.932 killing process with pid 82188 00:10:39.932 04:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 82188 00:10:39.932 [2024-11-21 04:08:39.845322] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:39.932 04:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 82188 00:10:40.191 [2024-11-21 04:08:39.926517] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:40.451 04:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:40.451 00:10:40.451 real 0m9.757s 00:10:40.451 user 0m16.333s 00:10:40.451 sys 0m2.246s 00:10:40.451 04:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:40.451 ************************************ 00:10:40.451 END TEST raid_state_function_test 00:10:40.451 ************************************ 00:10:40.451 04:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.451 04:08:40 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test concat 4 true 00:10:40.451 04:08:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:40.451 04:08:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:40.451 04:08:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:40.451 ************************************ 00:10:40.451 START TEST raid_state_function_test_sb 00:10:40.451 ************************************ 00:10:40.451 04:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true 00:10:40.451 04:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:40.451 04:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:40.451 04:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:40.451 04:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:40.451 04:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:40.451 04:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:40.451 04:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:40.451 04:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:40.451 04:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:40.451 04:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:40.451 04:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:40.451 04:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:40.451 04:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 
00:10:40.451 04:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:40.451 04:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:40.451 04:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:40.451 04:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:40.451 04:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:40.451 04:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:40.451 04:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:40.451 04:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:40.451 04:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:40.451 04:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:40.451 04:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:40.451 04:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:40.451 04:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:40.451 04:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:40.451 04:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:40.451 04:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:40.451 04:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=82847 00:10:40.451 04:08:40 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:40.451 Process raid pid: 82847 00:10:40.451 04:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82847' 00:10:40.451 04:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 82847 00:10:40.451 04:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 82847 ']' 00:10:40.451 04:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:40.451 04:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:40.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:40.451 04:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:40.451 04:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:40.451 04:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.711 [2024-11-21 04:08:40.424397] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
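The `raid_state_function_test_sb` prologue traced above (bdev_raid.sh@209-211) builds its base-bdev name list by looping `i` from 1 to `num_base_bdevs` and echoing `BaseBdev$i` each iteration. A standalone equivalent, simplified from the script's command-substitution capture into a direct array append:

```shell
#!/usr/bin/env bash
# Build the BaseBdev1..BaseBdevN name list, as the (( i <= num_base_bdevs ))
# loop in the trace does; here we append directly instead of capturing echoes.
num_base_bdevs=4
base_bdevs=()
for (( i = 1; i <= num_base_bdevs; i++ )); do
  base_bdevs+=("BaseBdev$i")
done
echo "${base_bdevs[@]}"   # BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4
```

Generating the names from a single `num_base_bdevs` parameter is what lets the same `raid_state_function_test` body run for 2-, 3-, or 4-disk arrays, as the `run_test ... concat 4 true` invocation in the log shows.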
00:10:40.711 [2024-11-21 04:08:40.424608] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:40.711 [2024-11-21 04:08:40.580853] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:40.711 [2024-11-21 04:08:40.622479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.971 [2024-11-21 04:08:40.698490] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:40.971 [2024-11-21 04:08:40.698531] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:41.541 04:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:41.541 04:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:41.541 04:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:41.541 04:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.541 04:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.541 [2024-11-21 04:08:41.261886] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:41.541 [2024-11-21 04:08:41.262051] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:41.541 [2024-11-21 04:08:41.262065] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:41.541 [2024-11-21 04:08:41.262076] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:41.541 [2024-11-21 04:08:41.262083] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:10:41.541 [2024-11-21 04:08:41.262112] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:41.541 [2024-11-21 04:08:41.262118] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:41.541 [2024-11-21 04:08:41.262139] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:41.541 04:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.541 04:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:41.541 04:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:41.541 04:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:41.541 04:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:41.541 04:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:41.541 04:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:41.541 04:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.541 04:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.541 04:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.541 04:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.541 04:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.541 04:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.541 
04:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.541 04:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.541 04:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.541 04:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.541 "name": "Existed_Raid", 00:10:41.541 "uuid": "7ecc4ce6-0dfc-46a4-87cb-f6da17cf7d4b", 00:10:41.541 "strip_size_kb": 64, 00:10:41.541 "state": "configuring", 00:10:41.541 "raid_level": "concat", 00:10:41.541 "superblock": true, 00:10:41.541 "num_base_bdevs": 4, 00:10:41.541 "num_base_bdevs_discovered": 0, 00:10:41.541 "num_base_bdevs_operational": 4, 00:10:41.541 "base_bdevs_list": [ 00:10:41.541 { 00:10:41.541 "name": "BaseBdev1", 00:10:41.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.541 "is_configured": false, 00:10:41.541 "data_offset": 0, 00:10:41.541 "data_size": 0 00:10:41.541 }, 00:10:41.541 { 00:10:41.541 "name": "BaseBdev2", 00:10:41.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.541 "is_configured": false, 00:10:41.541 "data_offset": 0, 00:10:41.541 "data_size": 0 00:10:41.541 }, 00:10:41.541 { 00:10:41.541 "name": "BaseBdev3", 00:10:41.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.541 "is_configured": false, 00:10:41.541 "data_offset": 0, 00:10:41.541 "data_size": 0 00:10:41.541 }, 00:10:41.541 { 00:10:41.541 "name": "BaseBdev4", 00:10:41.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.541 "is_configured": false, 00:10:41.541 "data_offset": 0, 00:10:41.541 "data_size": 0 00:10:41.541 } 00:10:41.541 ] 00:10:41.541 }' 00:10:41.541 04:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.541 04:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.801 04:08:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:41.801 04:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.801 04:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.801 [2024-11-21 04:08:41.709032] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:41.801 [2024-11-21 04:08:41.709168] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:10:41.801 04:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.801 04:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:41.801 04:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.801 04:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.801 [2024-11-21 04:08:41.717038] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:41.801 [2024-11-21 04:08:41.717140] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:41.801 [2024-11-21 04:08:41.717166] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:41.802 [2024-11-21 04:08:41.717189] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:41.802 [2024-11-21 04:08:41.717206] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:41.802 [2024-11-21 04:08:41.717236] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:41.802 [2024-11-21 04:08:41.717254] bdev.c:8272:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:10:41.802 [2024-11-21 04:08:41.717275] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:41.802 04:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.802 04:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:41.802 04:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.802 04:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.802 [2024-11-21 04:08:41.740102] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:41.802 BaseBdev1 00:10:41.802 04:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.802 04:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:41.802 04:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:41.802 04:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:41.802 04:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:41.802 04:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:41.802 04:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:41.802 04:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:41.802 04:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.802 04:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.802 04:08:41 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.802 04:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:41.802 04:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.802 04:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.802 [ 00:10:41.802 { 00:10:41.802 "name": "BaseBdev1", 00:10:41.802 "aliases": [ 00:10:41.802 "c8fe8930-3e23-4180-b54e-dceccad9091a" 00:10:41.802 ], 00:10:41.802 "product_name": "Malloc disk", 00:10:41.802 "block_size": 512, 00:10:41.802 "num_blocks": 65536, 00:10:41.802 "uuid": "c8fe8930-3e23-4180-b54e-dceccad9091a", 00:10:41.802 "assigned_rate_limits": { 00:10:41.802 "rw_ios_per_sec": 0, 00:10:41.802 "rw_mbytes_per_sec": 0, 00:10:41.802 "r_mbytes_per_sec": 0, 00:10:41.802 "w_mbytes_per_sec": 0 00:10:41.802 }, 00:10:41.802 "claimed": true, 00:10:41.802 "claim_type": "exclusive_write", 00:10:41.802 "zoned": false, 00:10:41.802 "supported_io_types": { 00:10:41.802 "read": true, 00:10:41.802 "write": true, 00:10:41.802 "unmap": true, 00:10:41.802 "flush": true, 00:10:42.062 "reset": true, 00:10:42.062 "nvme_admin": false, 00:10:42.062 "nvme_io": false, 00:10:42.062 "nvme_io_md": false, 00:10:42.062 "write_zeroes": true, 00:10:42.062 "zcopy": true, 00:10:42.062 "get_zone_info": false, 00:10:42.062 "zone_management": false, 00:10:42.062 "zone_append": false, 00:10:42.062 "compare": false, 00:10:42.062 "compare_and_write": false, 00:10:42.062 "abort": true, 00:10:42.062 "seek_hole": false, 00:10:42.062 "seek_data": false, 00:10:42.062 "copy": true, 00:10:42.062 "nvme_iov_md": false 00:10:42.062 }, 00:10:42.062 "memory_domains": [ 00:10:42.062 { 00:10:42.062 "dma_device_id": "system", 00:10:42.062 "dma_device_type": 1 00:10:42.062 }, 00:10:42.062 { 00:10:42.062 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.062 "dma_device_type": 2 00:10:42.062 } 
00:10:42.062 ], 00:10:42.062 "driver_specific": {} 00:10:42.062 } 00:10:42.062 ] 00:10:42.062 04:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.062 04:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:42.062 04:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:42.062 04:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:42.062 04:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:42.062 04:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:42.062 04:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:42.062 04:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:42.062 04:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.062 04:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.062 04:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.062 04:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.062 04:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.062 04:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.062 04:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.062 04:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.062 04:08:41 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.062 04:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.062 "name": "Existed_Raid", 00:10:42.062 "uuid": "14ae478d-5aa7-40d8-8a5b-8b300fa4115e", 00:10:42.062 "strip_size_kb": 64, 00:10:42.062 "state": "configuring", 00:10:42.062 "raid_level": "concat", 00:10:42.062 "superblock": true, 00:10:42.062 "num_base_bdevs": 4, 00:10:42.062 "num_base_bdevs_discovered": 1, 00:10:42.062 "num_base_bdevs_operational": 4, 00:10:42.062 "base_bdevs_list": [ 00:10:42.062 { 00:10:42.062 "name": "BaseBdev1", 00:10:42.062 "uuid": "c8fe8930-3e23-4180-b54e-dceccad9091a", 00:10:42.062 "is_configured": true, 00:10:42.062 "data_offset": 2048, 00:10:42.062 "data_size": 63488 00:10:42.062 }, 00:10:42.062 { 00:10:42.062 "name": "BaseBdev2", 00:10:42.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.062 "is_configured": false, 00:10:42.062 "data_offset": 0, 00:10:42.062 "data_size": 0 00:10:42.062 }, 00:10:42.062 { 00:10:42.062 "name": "BaseBdev3", 00:10:42.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.062 "is_configured": false, 00:10:42.062 "data_offset": 0, 00:10:42.062 "data_size": 0 00:10:42.062 }, 00:10:42.062 { 00:10:42.062 "name": "BaseBdev4", 00:10:42.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.062 "is_configured": false, 00:10:42.062 "data_offset": 0, 00:10:42.062 "data_size": 0 00:10:42.062 } 00:10:42.062 ] 00:10:42.062 }' 00:10:42.062 04:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.062 04:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.322 04:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:42.322 04:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.322 04:08:42 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.322 [2024-11-21 04:08:42.203377] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:42.322 [2024-11-21 04:08:42.203451] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:10:42.322 04:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.322 04:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:42.322 04:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.322 04:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.322 [2024-11-21 04:08:42.215396] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:42.322 [2024-11-21 04:08:42.217615] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:42.322 [2024-11-21 04:08:42.217745] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:42.322 [2024-11-21 04:08:42.217761] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:42.322 [2024-11-21 04:08:42.217771] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:42.322 [2024-11-21 04:08:42.217777] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:42.322 [2024-11-21 04:08:42.217785] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:42.322 04:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.322 04:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:10:42.322 04:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:42.322 04:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:42.322 04:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:42.322 04:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:42.322 04:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:42.322 04:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:42.322 04:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:42.322 04:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.322 04:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.322 04:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.322 04:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.322 04:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.322 04:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.322 04:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.322 04:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.322 04:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.322 04:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:10:42.322 "name": "Existed_Raid", 00:10:42.322 "uuid": "89c1f13a-e3e1-4493-a13e-b5f772292cfc", 00:10:42.322 "strip_size_kb": 64, 00:10:42.322 "state": "configuring", 00:10:42.322 "raid_level": "concat", 00:10:42.322 "superblock": true, 00:10:42.322 "num_base_bdevs": 4, 00:10:42.322 "num_base_bdevs_discovered": 1, 00:10:42.322 "num_base_bdevs_operational": 4, 00:10:42.322 "base_bdevs_list": [ 00:10:42.322 { 00:10:42.322 "name": "BaseBdev1", 00:10:42.322 "uuid": "c8fe8930-3e23-4180-b54e-dceccad9091a", 00:10:42.322 "is_configured": true, 00:10:42.322 "data_offset": 2048, 00:10:42.322 "data_size": 63488 00:10:42.322 }, 00:10:42.322 { 00:10:42.323 "name": "BaseBdev2", 00:10:42.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.323 "is_configured": false, 00:10:42.323 "data_offset": 0, 00:10:42.323 "data_size": 0 00:10:42.323 }, 00:10:42.323 { 00:10:42.323 "name": "BaseBdev3", 00:10:42.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.323 "is_configured": false, 00:10:42.323 "data_offset": 0, 00:10:42.323 "data_size": 0 00:10:42.323 }, 00:10:42.323 { 00:10:42.323 "name": "BaseBdev4", 00:10:42.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.323 "is_configured": false, 00:10:42.323 "data_offset": 0, 00:10:42.323 "data_size": 0 00:10:42.323 } 00:10:42.323 ] 00:10:42.323 }' 00:10:42.323 04:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.323 04:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.912 04:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:42.912 04:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.912 04:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.912 [2024-11-21 04:08:42.659356] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:10:42.912 BaseBdev2 00:10:42.912 04:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.912 04:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:42.912 04:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:42.912 04:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:42.912 04:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:42.912 04:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:42.912 04:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:42.912 04:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:42.912 04:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.912 04:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.912 04:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.912 04:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:42.912 04:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.912 04:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.912 [ 00:10:42.912 { 00:10:42.912 "name": "BaseBdev2", 00:10:42.912 "aliases": [ 00:10:42.912 "282605bd-dfcc-4b05-ad16-75332ac05a55" 00:10:42.912 ], 00:10:42.912 "product_name": "Malloc disk", 00:10:42.912 "block_size": 512, 00:10:42.912 "num_blocks": 65536, 00:10:42.912 "uuid": "282605bd-dfcc-4b05-ad16-75332ac05a55", 
00:10:42.912 "assigned_rate_limits": { 00:10:42.912 "rw_ios_per_sec": 0, 00:10:42.912 "rw_mbytes_per_sec": 0, 00:10:42.912 "r_mbytes_per_sec": 0, 00:10:42.912 "w_mbytes_per_sec": 0 00:10:42.912 }, 00:10:42.912 "claimed": true, 00:10:42.912 "claim_type": "exclusive_write", 00:10:42.912 "zoned": false, 00:10:42.912 "supported_io_types": { 00:10:42.912 "read": true, 00:10:42.912 "write": true, 00:10:42.912 "unmap": true, 00:10:42.912 "flush": true, 00:10:42.912 "reset": true, 00:10:42.912 "nvme_admin": false, 00:10:42.912 "nvme_io": false, 00:10:42.912 "nvme_io_md": false, 00:10:42.912 "write_zeroes": true, 00:10:42.912 "zcopy": true, 00:10:42.912 "get_zone_info": false, 00:10:42.912 "zone_management": false, 00:10:42.912 "zone_append": false, 00:10:42.912 "compare": false, 00:10:42.912 "compare_and_write": false, 00:10:42.912 "abort": true, 00:10:42.912 "seek_hole": false, 00:10:42.912 "seek_data": false, 00:10:42.912 "copy": true, 00:10:42.912 "nvme_iov_md": false 00:10:42.912 }, 00:10:42.912 "memory_domains": [ 00:10:42.912 { 00:10:42.912 "dma_device_id": "system", 00:10:42.912 "dma_device_type": 1 00:10:42.912 }, 00:10:42.912 { 00:10:42.912 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.912 "dma_device_type": 2 00:10:42.912 } 00:10:42.912 ], 00:10:42.912 "driver_specific": {} 00:10:42.912 } 00:10:42.912 ] 00:10:42.912 04:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.912 04:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:42.912 04:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:42.912 04:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:42.912 04:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:42.912 04:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:10:42.912 04:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:42.912 04:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:42.912 04:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:42.912 04:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:42.912 04:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.912 04:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.912 04:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.912 04:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.912 04:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.912 04:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.912 04:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.912 04:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.912 04:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.912 04:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.912 "name": "Existed_Raid", 00:10:42.912 "uuid": "89c1f13a-e3e1-4493-a13e-b5f772292cfc", 00:10:42.912 "strip_size_kb": 64, 00:10:42.912 "state": "configuring", 00:10:42.912 "raid_level": "concat", 00:10:42.912 "superblock": true, 00:10:42.912 "num_base_bdevs": 4, 00:10:42.912 "num_base_bdevs_discovered": 2, 00:10:42.912 
"num_base_bdevs_operational": 4, 00:10:42.912 "base_bdevs_list": [ 00:10:42.912 { 00:10:42.912 "name": "BaseBdev1", 00:10:42.912 "uuid": "c8fe8930-3e23-4180-b54e-dceccad9091a", 00:10:42.912 "is_configured": true, 00:10:42.912 "data_offset": 2048, 00:10:42.912 "data_size": 63488 00:10:42.912 }, 00:10:42.912 { 00:10:42.912 "name": "BaseBdev2", 00:10:42.912 "uuid": "282605bd-dfcc-4b05-ad16-75332ac05a55", 00:10:42.912 "is_configured": true, 00:10:42.912 "data_offset": 2048, 00:10:42.912 "data_size": 63488 00:10:42.912 }, 00:10:42.912 { 00:10:42.912 "name": "BaseBdev3", 00:10:42.912 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.912 "is_configured": false, 00:10:42.912 "data_offset": 0, 00:10:42.912 "data_size": 0 00:10:42.912 }, 00:10:42.912 { 00:10:42.912 "name": "BaseBdev4", 00:10:42.912 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.912 "is_configured": false, 00:10:42.912 "data_offset": 0, 00:10:42.912 "data_size": 0 00:10:42.912 } 00:10:42.912 ] 00:10:42.912 }' 00:10:42.912 04:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.912 04:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.173 04:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:43.173 04:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.173 04:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.173 [2024-11-21 04:08:43.107769] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:43.173 BaseBdev3 00:10:43.173 04:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.173 04:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:43.173 04:08:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:43.173 04:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:43.173 04:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:43.173 04:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:43.173 04:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:43.173 04:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:43.173 04:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.173 04:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.173 04:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.173 04:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:43.173 04:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.173 04:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.173 [ 00:10:43.173 { 00:10:43.173 "name": "BaseBdev3", 00:10:43.173 "aliases": [ 00:10:43.173 "14dc2518-55af-4e28-aa69-7625e4f5d39c" 00:10:43.173 ], 00:10:43.173 "product_name": "Malloc disk", 00:10:43.173 "block_size": 512, 00:10:43.173 "num_blocks": 65536, 00:10:43.173 "uuid": "14dc2518-55af-4e28-aa69-7625e4f5d39c", 00:10:43.173 "assigned_rate_limits": { 00:10:43.173 "rw_ios_per_sec": 0, 00:10:43.173 "rw_mbytes_per_sec": 0, 00:10:43.173 "r_mbytes_per_sec": 0, 00:10:43.173 "w_mbytes_per_sec": 0 00:10:43.173 }, 00:10:43.173 "claimed": true, 00:10:43.173 "claim_type": "exclusive_write", 00:10:43.173 "zoned": false, 00:10:43.173 "supported_io_types": { 
00:10:43.173 "read": true, 00:10:43.173 "write": true, 00:10:43.173 "unmap": true, 00:10:43.173 "flush": true, 00:10:43.173 "reset": true, 00:10:43.173 "nvme_admin": false, 00:10:43.173 "nvme_io": false, 00:10:43.173 "nvme_io_md": false, 00:10:43.173 "write_zeroes": true, 00:10:43.173 "zcopy": true, 00:10:43.173 "get_zone_info": false, 00:10:43.173 "zone_management": false, 00:10:43.173 "zone_append": false, 00:10:43.173 "compare": false, 00:10:43.173 "compare_and_write": false, 00:10:43.173 "abort": true, 00:10:43.173 "seek_hole": false, 00:10:43.173 "seek_data": false, 00:10:43.173 "copy": true, 00:10:43.173 "nvme_iov_md": false 00:10:43.173 }, 00:10:43.173 "memory_domains": [ 00:10:43.173 { 00:10:43.173 "dma_device_id": "system", 00:10:43.173 "dma_device_type": 1 00:10:43.173 }, 00:10:43.173 { 00:10:43.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.173 "dma_device_type": 2 00:10:43.173 } 00:10:43.173 ], 00:10:43.173 "driver_specific": {} 00:10:43.173 } 00:10:43.173 ] 00:10:43.433 04:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.433 04:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:43.433 04:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:43.433 04:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:43.433 04:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:43.433 04:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:43.433 04:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:43.433 04:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:43.433 04:08:43 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:43.433 04:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:43.433 04:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.433 04:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.434 04:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.434 04:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.434 04:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.434 04:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.434 04:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.434 04:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.434 04:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.434 04:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.434 "name": "Existed_Raid", 00:10:43.434 "uuid": "89c1f13a-e3e1-4493-a13e-b5f772292cfc", 00:10:43.434 "strip_size_kb": 64, 00:10:43.434 "state": "configuring", 00:10:43.434 "raid_level": "concat", 00:10:43.434 "superblock": true, 00:10:43.434 "num_base_bdevs": 4, 00:10:43.434 "num_base_bdevs_discovered": 3, 00:10:43.434 "num_base_bdevs_operational": 4, 00:10:43.434 "base_bdevs_list": [ 00:10:43.434 { 00:10:43.434 "name": "BaseBdev1", 00:10:43.434 "uuid": "c8fe8930-3e23-4180-b54e-dceccad9091a", 00:10:43.434 "is_configured": true, 00:10:43.434 "data_offset": 2048, 00:10:43.434 "data_size": 63488 00:10:43.434 }, 00:10:43.434 { 00:10:43.434 "name": "BaseBdev2", 00:10:43.434 
"uuid": "282605bd-dfcc-4b05-ad16-75332ac05a55", 00:10:43.434 "is_configured": true, 00:10:43.434 "data_offset": 2048, 00:10:43.434 "data_size": 63488 00:10:43.434 }, 00:10:43.434 { 00:10:43.434 "name": "BaseBdev3", 00:10:43.434 "uuid": "14dc2518-55af-4e28-aa69-7625e4f5d39c", 00:10:43.434 "is_configured": true, 00:10:43.434 "data_offset": 2048, 00:10:43.434 "data_size": 63488 00:10:43.434 }, 00:10:43.434 { 00:10:43.434 "name": "BaseBdev4", 00:10:43.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.434 "is_configured": false, 00:10:43.434 "data_offset": 0, 00:10:43.434 "data_size": 0 00:10:43.434 } 00:10:43.434 ] 00:10:43.434 }' 00:10:43.434 04:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.434 04:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.694 04:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:43.694 04:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.694 04:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.694 [2024-11-21 04:08:43.588007] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:43.694 [2024-11-21 04:08:43.588376] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:10:43.694 [2024-11-21 04:08:43.588397] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:43.694 [2024-11-21 04:08:43.588744] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:10:43.694 BaseBdev4 00:10:43.694 [2024-11-21 04:08:43.588892] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:10:43.694 [2024-11-21 04:08:43.588911] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000001900 00:10:43.694 [2024-11-21 04:08:43.589040] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:43.694 04:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.694 04:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:43.694 04:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:43.694 04:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:43.694 04:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:43.694 04:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:43.694 04:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:43.694 04:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:43.694 04:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.694 04:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.694 04:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.694 04:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:43.694 04:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.694 04:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.694 [ 00:10:43.694 { 00:10:43.694 "name": "BaseBdev4", 00:10:43.694 "aliases": [ 00:10:43.694 "10d13e74-60f6-44c5-b84c-48057e225543" 00:10:43.694 ], 00:10:43.694 "product_name": "Malloc disk", 00:10:43.694 "block_size": 512, 00:10:43.694 
"num_blocks": 65536, 00:10:43.694 "uuid": "10d13e74-60f6-44c5-b84c-48057e225543", 00:10:43.694 "assigned_rate_limits": { 00:10:43.694 "rw_ios_per_sec": 0, 00:10:43.695 "rw_mbytes_per_sec": 0, 00:10:43.695 "r_mbytes_per_sec": 0, 00:10:43.695 "w_mbytes_per_sec": 0 00:10:43.695 }, 00:10:43.695 "claimed": true, 00:10:43.695 "claim_type": "exclusive_write", 00:10:43.695 "zoned": false, 00:10:43.695 "supported_io_types": { 00:10:43.695 "read": true, 00:10:43.695 "write": true, 00:10:43.695 "unmap": true, 00:10:43.695 "flush": true, 00:10:43.695 "reset": true, 00:10:43.695 "nvme_admin": false, 00:10:43.695 "nvme_io": false, 00:10:43.695 "nvme_io_md": false, 00:10:43.695 "write_zeroes": true, 00:10:43.695 "zcopy": true, 00:10:43.695 "get_zone_info": false, 00:10:43.695 "zone_management": false, 00:10:43.695 "zone_append": false, 00:10:43.695 "compare": false, 00:10:43.695 "compare_and_write": false, 00:10:43.695 "abort": true, 00:10:43.695 "seek_hole": false, 00:10:43.695 "seek_data": false, 00:10:43.695 "copy": true, 00:10:43.695 "nvme_iov_md": false 00:10:43.695 }, 00:10:43.695 "memory_domains": [ 00:10:43.695 { 00:10:43.695 "dma_device_id": "system", 00:10:43.695 "dma_device_type": 1 00:10:43.695 }, 00:10:43.695 { 00:10:43.695 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.695 "dma_device_type": 2 00:10:43.695 } 00:10:43.695 ], 00:10:43.695 "driver_specific": {} 00:10:43.695 } 00:10:43.695 ] 00:10:43.695 04:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.695 04:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:43.695 04:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:43.695 04:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:43.695 04:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:10:43.695 04:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:43.695 04:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:43.695 04:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:43.695 04:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:43.695 04:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:43.695 04:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.695 04:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.695 04:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.695 04:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.695 04:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.695 04:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.695 04:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.695 04:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.695 04:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.954 04:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.954 "name": "Existed_Raid", 00:10:43.954 "uuid": "89c1f13a-e3e1-4493-a13e-b5f772292cfc", 00:10:43.954 "strip_size_kb": 64, 00:10:43.954 "state": "online", 00:10:43.954 "raid_level": "concat", 00:10:43.954 "superblock": true, 00:10:43.954 "num_base_bdevs": 4, 
00:10:43.954 "num_base_bdevs_discovered": 4, 00:10:43.954 "num_base_bdevs_operational": 4, 00:10:43.954 "base_bdevs_list": [ 00:10:43.954 { 00:10:43.954 "name": "BaseBdev1", 00:10:43.954 "uuid": "c8fe8930-3e23-4180-b54e-dceccad9091a", 00:10:43.954 "is_configured": true, 00:10:43.954 "data_offset": 2048, 00:10:43.954 "data_size": 63488 00:10:43.954 }, 00:10:43.954 { 00:10:43.954 "name": "BaseBdev2", 00:10:43.954 "uuid": "282605bd-dfcc-4b05-ad16-75332ac05a55", 00:10:43.954 "is_configured": true, 00:10:43.954 "data_offset": 2048, 00:10:43.954 "data_size": 63488 00:10:43.954 }, 00:10:43.954 { 00:10:43.954 "name": "BaseBdev3", 00:10:43.954 "uuid": "14dc2518-55af-4e28-aa69-7625e4f5d39c", 00:10:43.954 "is_configured": true, 00:10:43.954 "data_offset": 2048, 00:10:43.954 "data_size": 63488 00:10:43.954 }, 00:10:43.954 { 00:10:43.954 "name": "BaseBdev4", 00:10:43.954 "uuid": "10d13e74-60f6-44c5-b84c-48057e225543", 00:10:43.954 "is_configured": true, 00:10:43.954 "data_offset": 2048, 00:10:43.954 "data_size": 63488 00:10:43.954 } 00:10:43.954 ] 00:10:43.954 }' 00:10:43.954 04:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.955 04:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.214 04:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:44.214 04:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:44.214 04:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:44.214 04:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:44.214 04:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:44.214 04:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:44.214 
04:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:44.214 04:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:44.214 04:08:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.214 04:08:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.214 [2024-11-21 04:08:44.087646] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:44.214 04:08:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.214 04:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:44.214 "name": "Existed_Raid", 00:10:44.214 "aliases": [ 00:10:44.214 "89c1f13a-e3e1-4493-a13e-b5f772292cfc" 00:10:44.214 ], 00:10:44.214 "product_name": "Raid Volume", 00:10:44.214 "block_size": 512, 00:10:44.214 "num_blocks": 253952, 00:10:44.214 "uuid": "89c1f13a-e3e1-4493-a13e-b5f772292cfc", 00:10:44.214 "assigned_rate_limits": { 00:10:44.214 "rw_ios_per_sec": 0, 00:10:44.214 "rw_mbytes_per_sec": 0, 00:10:44.214 "r_mbytes_per_sec": 0, 00:10:44.214 "w_mbytes_per_sec": 0 00:10:44.214 }, 00:10:44.214 "claimed": false, 00:10:44.214 "zoned": false, 00:10:44.214 "supported_io_types": { 00:10:44.214 "read": true, 00:10:44.214 "write": true, 00:10:44.214 "unmap": true, 00:10:44.214 "flush": true, 00:10:44.214 "reset": true, 00:10:44.214 "nvme_admin": false, 00:10:44.214 "nvme_io": false, 00:10:44.214 "nvme_io_md": false, 00:10:44.214 "write_zeroes": true, 00:10:44.214 "zcopy": false, 00:10:44.214 "get_zone_info": false, 00:10:44.214 "zone_management": false, 00:10:44.214 "zone_append": false, 00:10:44.214 "compare": false, 00:10:44.214 "compare_and_write": false, 00:10:44.214 "abort": false, 00:10:44.214 "seek_hole": false, 00:10:44.214 "seek_data": false, 00:10:44.214 "copy": false, 00:10:44.214 
"nvme_iov_md": false 00:10:44.214 }, 00:10:44.214 "memory_domains": [ 00:10:44.214 { 00:10:44.214 "dma_device_id": "system", 00:10:44.214 "dma_device_type": 1 00:10:44.214 }, 00:10:44.214 { 00:10:44.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.214 "dma_device_type": 2 00:10:44.214 }, 00:10:44.214 { 00:10:44.214 "dma_device_id": "system", 00:10:44.214 "dma_device_type": 1 00:10:44.214 }, 00:10:44.214 { 00:10:44.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.214 "dma_device_type": 2 00:10:44.214 }, 00:10:44.214 { 00:10:44.214 "dma_device_id": "system", 00:10:44.214 "dma_device_type": 1 00:10:44.214 }, 00:10:44.214 { 00:10:44.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.215 "dma_device_type": 2 00:10:44.215 }, 00:10:44.215 { 00:10:44.215 "dma_device_id": "system", 00:10:44.215 "dma_device_type": 1 00:10:44.215 }, 00:10:44.215 { 00:10:44.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.215 "dma_device_type": 2 00:10:44.215 } 00:10:44.215 ], 00:10:44.215 "driver_specific": { 00:10:44.215 "raid": { 00:10:44.215 "uuid": "89c1f13a-e3e1-4493-a13e-b5f772292cfc", 00:10:44.215 "strip_size_kb": 64, 00:10:44.215 "state": "online", 00:10:44.215 "raid_level": "concat", 00:10:44.215 "superblock": true, 00:10:44.215 "num_base_bdevs": 4, 00:10:44.215 "num_base_bdevs_discovered": 4, 00:10:44.215 "num_base_bdevs_operational": 4, 00:10:44.215 "base_bdevs_list": [ 00:10:44.215 { 00:10:44.215 "name": "BaseBdev1", 00:10:44.215 "uuid": "c8fe8930-3e23-4180-b54e-dceccad9091a", 00:10:44.215 "is_configured": true, 00:10:44.215 "data_offset": 2048, 00:10:44.215 "data_size": 63488 00:10:44.215 }, 00:10:44.215 { 00:10:44.215 "name": "BaseBdev2", 00:10:44.215 "uuid": "282605bd-dfcc-4b05-ad16-75332ac05a55", 00:10:44.215 "is_configured": true, 00:10:44.215 "data_offset": 2048, 00:10:44.215 "data_size": 63488 00:10:44.215 }, 00:10:44.215 { 00:10:44.215 "name": "BaseBdev3", 00:10:44.215 "uuid": "14dc2518-55af-4e28-aa69-7625e4f5d39c", 00:10:44.215 "is_configured": true, 
00:10:44.215 "data_offset": 2048, 00:10:44.215 "data_size": 63488 00:10:44.215 }, 00:10:44.215 { 00:10:44.215 "name": "BaseBdev4", 00:10:44.215 "uuid": "10d13e74-60f6-44c5-b84c-48057e225543", 00:10:44.215 "is_configured": true, 00:10:44.215 "data_offset": 2048, 00:10:44.215 "data_size": 63488 00:10:44.215 } 00:10:44.215 ] 00:10:44.215 } 00:10:44.215 } 00:10:44.215 }' 00:10:44.215 04:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:44.215 04:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:44.215 BaseBdev2 00:10:44.215 BaseBdev3 00:10:44.215 BaseBdev4' 00:10:44.215 04:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:44.475 04:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:44.475 04:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:44.475 04:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:44.475 04:08:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.475 04:08:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.475 04:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:44.475 04:08:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.475 04:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:44.475 04:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:44.475 04:08:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:44.475 04:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:44.475 04:08:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.475 04:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:44.475 04:08:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.475 04:08:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.475 04:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:44.475 04:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:44.475 04:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:44.475 04:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:44.475 04:08:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.475 04:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:44.475 04:08:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.475 04:08:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.475 04:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:44.475 04:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:44.475 04:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:10:44.475 04:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:44.475 04:08:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.475 04:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:44.475 04:08:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.475 04:08:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.475 04:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:44.475 04:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:44.475 04:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:44.475 04:08:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.475 04:08:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.475 [2024-11-21 04:08:44.402736] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:44.475 [2024-11-21 04:08:44.402774] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:44.475 [2024-11-21 04:08:44.402837] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:44.475 04:08:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.475 04:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:44.475 04:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:44.475 04:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:10:44.475 04:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:44.475 04:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:44.475 04:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:10:44.475 04:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:44.475 04:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:44.475 04:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:44.475 04:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:44.475 04:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:44.476 04:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.476 04:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.476 04:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.476 04:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.476 04:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.476 04:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.476 04:08:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.476 04:08:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.736 04:08:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:44.736 04:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.736 "name": "Existed_Raid", 00:10:44.736 "uuid": "89c1f13a-e3e1-4493-a13e-b5f772292cfc", 00:10:44.736 "strip_size_kb": 64, 00:10:44.736 "state": "offline", 00:10:44.736 "raid_level": "concat", 00:10:44.736 "superblock": true, 00:10:44.736 "num_base_bdevs": 4, 00:10:44.736 "num_base_bdevs_discovered": 3, 00:10:44.736 "num_base_bdevs_operational": 3, 00:10:44.736 "base_bdevs_list": [ 00:10:44.736 { 00:10:44.736 "name": null, 00:10:44.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.736 "is_configured": false, 00:10:44.736 "data_offset": 0, 00:10:44.736 "data_size": 63488 00:10:44.736 }, 00:10:44.736 { 00:10:44.736 "name": "BaseBdev2", 00:10:44.736 "uuid": "282605bd-dfcc-4b05-ad16-75332ac05a55", 00:10:44.736 "is_configured": true, 00:10:44.736 "data_offset": 2048, 00:10:44.736 "data_size": 63488 00:10:44.736 }, 00:10:44.736 { 00:10:44.736 "name": "BaseBdev3", 00:10:44.736 "uuid": "14dc2518-55af-4e28-aa69-7625e4f5d39c", 00:10:44.736 "is_configured": true, 00:10:44.736 "data_offset": 2048, 00:10:44.736 "data_size": 63488 00:10:44.736 }, 00:10:44.736 { 00:10:44.736 "name": "BaseBdev4", 00:10:44.736 "uuid": "10d13e74-60f6-44c5-b84c-48057e225543", 00:10:44.736 "is_configured": true, 00:10:44.736 "data_offset": 2048, 00:10:44.736 "data_size": 63488 00:10:44.736 } 00:10:44.736 ] 00:10:44.736 }' 00:10:44.736 04:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.736 04:08:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.997 04:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:44.997 04:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:44.997 04:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.997 
04:08:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.997 04:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:44.997 04:08:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.997 04:08:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.997 04:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:44.997 04:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:44.997 04:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:44.997 04:08:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.997 04:08:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.997 [2024-11-21 04:08:44.926909] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:44.997 04:08:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.997 04:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:44.997 04:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:44.997 04:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.997 04:08:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.997 04:08:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.997 04:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:44.997 04:08:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:45.257 04:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:45.257 04:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:45.257 04:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:45.257 04:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.257 04:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.257 [2024-11-21 04:08:45.007615] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:45.257 04:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.257 04:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:45.257 04:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:45.257 04:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.257 04:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.257 04:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:45.257 04:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.258 04:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.258 04:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:45.258 04:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:45.258 04:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:45.258 04:08:45 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.258 04:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.258 [2024-11-21 04:08:45.084320] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:45.258 [2024-11-21 04:08:45.084473] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:10:45.258 04:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.258 04:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:45.258 04:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:45.258 04:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.258 04:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:45.258 04:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.258 04:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.258 04:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.258 04:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:45.258 04:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:45.258 04:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:45.258 04:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:45.258 04:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:45.258 04:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:10:45.258 04:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.258 04:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.258 BaseBdev2 00:10:45.258 04:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.258 04:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:45.258 04:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:45.258 04:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:45.258 04:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:45.258 04:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:45.258 04:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:45.258 04:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:45.258 04:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.258 04:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.258 04:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.258 04:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:45.258 04:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.258 04:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.258 [ 00:10:45.258 { 00:10:45.258 "name": "BaseBdev2", 00:10:45.258 "aliases": [ 00:10:45.258 
"5af1ae2d-f697-41b6-affd-4914d0c16553" 00:10:45.258 ], 00:10:45.258 "product_name": "Malloc disk", 00:10:45.258 "block_size": 512, 00:10:45.258 "num_blocks": 65536, 00:10:45.258 "uuid": "5af1ae2d-f697-41b6-affd-4914d0c16553", 00:10:45.258 "assigned_rate_limits": { 00:10:45.258 "rw_ios_per_sec": 0, 00:10:45.258 "rw_mbytes_per_sec": 0, 00:10:45.258 "r_mbytes_per_sec": 0, 00:10:45.258 "w_mbytes_per_sec": 0 00:10:45.258 }, 00:10:45.258 "claimed": false, 00:10:45.258 "zoned": false, 00:10:45.258 "supported_io_types": { 00:10:45.258 "read": true, 00:10:45.258 "write": true, 00:10:45.258 "unmap": true, 00:10:45.258 "flush": true, 00:10:45.258 "reset": true, 00:10:45.258 "nvme_admin": false, 00:10:45.258 "nvme_io": false, 00:10:45.258 "nvme_io_md": false, 00:10:45.258 "write_zeroes": true, 00:10:45.258 "zcopy": true, 00:10:45.258 "get_zone_info": false, 00:10:45.258 "zone_management": false, 00:10:45.258 "zone_append": false, 00:10:45.258 "compare": false, 00:10:45.258 "compare_and_write": false, 00:10:45.258 "abort": true, 00:10:45.258 "seek_hole": false, 00:10:45.258 "seek_data": false, 00:10:45.258 "copy": true, 00:10:45.258 "nvme_iov_md": false 00:10:45.258 }, 00:10:45.258 "memory_domains": [ 00:10:45.258 { 00:10:45.258 "dma_device_id": "system", 00:10:45.258 "dma_device_type": 1 00:10:45.258 }, 00:10:45.258 { 00:10:45.258 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.258 "dma_device_type": 2 00:10:45.258 } 00:10:45.258 ], 00:10:45.258 "driver_specific": {} 00:10:45.258 } 00:10:45.258 ] 00:10:45.258 04:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.258 04:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:45.258 04:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:45.258 04:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:45.258 04:08:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:45.258 04:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.258 04:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.519 BaseBdev3 00:10:45.519 04:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.519 04:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:45.519 04:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:45.519 04:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:45.519 04:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:45.519 04:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:45.519 04:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:45.519 04:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:45.519 04:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.519 04:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.519 04:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.519 04:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:45.519 04:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.519 04:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.519 [ 00:10:45.519 { 
00:10:45.519 "name": "BaseBdev3", 00:10:45.519 "aliases": [ 00:10:45.519 "14f4c536-94a8-45cd-8b52-21c88d6f1b70" 00:10:45.519 ], 00:10:45.519 "product_name": "Malloc disk", 00:10:45.519 "block_size": 512, 00:10:45.519 "num_blocks": 65536, 00:10:45.519 "uuid": "14f4c536-94a8-45cd-8b52-21c88d6f1b70", 00:10:45.519 "assigned_rate_limits": { 00:10:45.519 "rw_ios_per_sec": 0, 00:10:45.519 "rw_mbytes_per_sec": 0, 00:10:45.519 "r_mbytes_per_sec": 0, 00:10:45.519 "w_mbytes_per_sec": 0 00:10:45.519 }, 00:10:45.519 "claimed": false, 00:10:45.519 "zoned": false, 00:10:45.519 "supported_io_types": { 00:10:45.519 "read": true, 00:10:45.519 "write": true, 00:10:45.519 "unmap": true, 00:10:45.519 "flush": true, 00:10:45.519 "reset": true, 00:10:45.519 "nvme_admin": false, 00:10:45.519 "nvme_io": false, 00:10:45.519 "nvme_io_md": false, 00:10:45.519 "write_zeroes": true, 00:10:45.519 "zcopy": true, 00:10:45.519 "get_zone_info": false, 00:10:45.519 "zone_management": false, 00:10:45.519 "zone_append": false, 00:10:45.519 "compare": false, 00:10:45.519 "compare_and_write": false, 00:10:45.519 "abort": true, 00:10:45.519 "seek_hole": false, 00:10:45.519 "seek_data": false, 00:10:45.519 "copy": true, 00:10:45.520 "nvme_iov_md": false 00:10:45.520 }, 00:10:45.520 "memory_domains": [ 00:10:45.520 { 00:10:45.520 "dma_device_id": "system", 00:10:45.520 "dma_device_type": 1 00:10:45.520 }, 00:10:45.520 { 00:10:45.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.520 "dma_device_type": 2 00:10:45.520 } 00:10:45.520 ], 00:10:45.520 "driver_specific": {} 00:10:45.520 } 00:10:45.520 ] 00:10:45.520 04:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.520 04:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:45.520 04:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:45.520 04:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:10:45.520 04:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:45.520 04:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.520 04:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.520 BaseBdev4 00:10:45.520 04:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.520 04:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:45.520 04:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:45.520 04:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:45.520 04:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:45.520 04:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:45.520 04:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:45.520 04:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:45.520 04:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.520 04:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.520 04:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.520 04:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:45.520 04:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.520 04:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:10:45.520 [ 00:10:45.520 { 00:10:45.520 "name": "BaseBdev4", 00:10:45.520 "aliases": [ 00:10:45.520 "7379c58b-f1e8-4c89-ba0a-e5468ff9c1b0" 00:10:45.520 ], 00:10:45.520 "product_name": "Malloc disk", 00:10:45.520 "block_size": 512, 00:10:45.520 "num_blocks": 65536, 00:10:45.520 "uuid": "7379c58b-f1e8-4c89-ba0a-e5468ff9c1b0", 00:10:45.520 "assigned_rate_limits": { 00:10:45.520 "rw_ios_per_sec": 0, 00:10:45.520 "rw_mbytes_per_sec": 0, 00:10:45.520 "r_mbytes_per_sec": 0, 00:10:45.520 "w_mbytes_per_sec": 0 00:10:45.520 }, 00:10:45.520 "claimed": false, 00:10:45.520 "zoned": false, 00:10:45.520 "supported_io_types": { 00:10:45.520 "read": true, 00:10:45.520 "write": true, 00:10:45.520 "unmap": true, 00:10:45.520 "flush": true, 00:10:45.520 "reset": true, 00:10:45.520 "nvme_admin": false, 00:10:45.520 "nvme_io": false, 00:10:45.520 "nvme_io_md": false, 00:10:45.520 "write_zeroes": true, 00:10:45.520 "zcopy": true, 00:10:45.520 "get_zone_info": false, 00:10:45.520 "zone_management": false, 00:10:45.520 "zone_append": false, 00:10:45.520 "compare": false, 00:10:45.520 "compare_and_write": false, 00:10:45.520 "abort": true, 00:10:45.520 "seek_hole": false, 00:10:45.520 "seek_data": false, 00:10:45.520 "copy": true, 00:10:45.520 "nvme_iov_md": false 00:10:45.520 }, 00:10:45.520 "memory_domains": [ 00:10:45.520 { 00:10:45.520 "dma_device_id": "system", 00:10:45.520 "dma_device_type": 1 00:10:45.520 }, 00:10:45.520 { 00:10:45.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.520 "dma_device_type": 2 00:10:45.520 } 00:10:45.520 ], 00:10:45.520 "driver_specific": {} 00:10:45.520 } 00:10:45.520 ] 00:10:45.520 04:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.520 04:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:45.520 04:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:45.520 04:08:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:45.520 04:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:45.520 04:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.520 04:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.520 [2024-11-21 04:08:45.341092] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:45.520 [2024-11-21 04:08:45.341226] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:45.520 [2024-11-21 04:08:45.341291] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:45.520 [2024-11-21 04:08:45.343468] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:45.520 [2024-11-21 04:08:45.343559] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:45.520 04:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.520 04:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:45.520 04:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:45.520 04:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:45.520 04:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:45.520 04:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:45.520 04:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:45.520 04:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.520 04:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.520 04:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.520 04:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.520 04:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.520 04:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.520 04:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.520 04:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.520 04:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.520 04:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.520 "name": "Existed_Raid", 00:10:45.520 "uuid": "4e3bda96-3311-44e5-98af-68e518adbd53", 00:10:45.520 "strip_size_kb": 64, 00:10:45.520 "state": "configuring", 00:10:45.520 "raid_level": "concat", 00:10:45.520 "superblock": true, 00:10:45.520 "num_base_bdevs": 4, 00:10:45.520 "num_base_bdevs_discovered": 3, 00:10:45.520 "num_base_bdevs_operational": 4, 00:10:45.520 "base_bdevs_list": [ 00:10:45.520 { 00:10:45.520 "name": "BaseBdev1", 00:10:45.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.520 "is_configured": false, 00:10:45.520 "data_offset": 0, 00:10:45.520 "data_size": 0 00:10:45.520 }, 00:10:45.520 { 00:10:45.520 "name": "BaseBdev2", 00:10:45.520 "uuid": "5af1ae2d-f697-41b6-affd-4914d0c16553", 00:10:45.520 "is_configured": true, 00:10:45.520 "data_offset": 2048, 00:10:45.520 "data_size": 63488 
00:10:45.520 }, 00:10:45.520 { 00:10:45.520 "name": "BaseBdev3", 00:10:45.520 "uuid": "14f4c536-94a8-45cd-8b52-21c88d6f1b70", 00:10:45.520 "is_configured": true, 00:10:45.520 "data_offset": 2048, 00:10:45.520 "data_size": 63488 00:10:45.520 }, 00:10:45.520 { 00:10:45.520 "name": "BaseBdev4", 00:10:45.520 "uuid": "7379c58b-f1e8-4c89-ba0a-e5468ff9c1b0", 00:10:45.520 "is_configured": true, 00:10:45.520 "data_offset": 2048, 00:10:45.520 "data_size": 63488 00:10:45.520 } 00:10:45.520 ] 00:10:45.520 }' 00:10:45.520 04:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.520 04:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.090 04:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:46.090 04:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.090 04:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.090 [2024-11-21 04:08:45.764391] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:46.090 04:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.090 04:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:46.090 04:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:46.090 04:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:46.090 04:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:46.090 04:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:46.090 04:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:46.090 04:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.090 04:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.090 04:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.090 04:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.090 04:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:46.090 04:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.090 04:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.090 04:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.090 04:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.090 04:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.090 "name": "Existed_Raid", 00:10:46.090 "uuid": "4e3bda96-3311-44e5-98af-68e518adbd53", 00:10:46.090 "strip_size_kb": 64, 00:10:46.090 "state": "configuring", 00:10:46.090 "raid_level": "concat", 00:10:46.090 "superblock": true, 00:10:46.090 "num_base_bdevs": 4, 00:10:46.090 "num_base_bdevs_discovered": 2, 00:10:46.090 "num_base_bdevs_operational": 4, 00:10:46.090 "base_bdevs_list": [ 00:10:46.090 { 00:10:46.090 "name": "BaseBdev1", 00:10:46.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.090 "is_configured": false, 00:10:46.090 "data_offset": 0, 00:10:46.090 "data_size": 0 00:10:46.090 }, 00:10:46.090 { 00:10:46.090 "name": null, 00:10:46.090 "uuid": "5af1ae2d-f697-41b6-affd-4914d0c16553", 00:10:46.090 "is_configured": false, 00:10:46.090 "data_offset": 0, 00:10:46.090 "data_size": 63488 
00:10:46.090 }, 00:10:46.090 { 00:10:46.090 "name": "BaseBdev3", 00:10:46.090 "uuid": "14f4c536-94a8-45cd-8b52-21c88d6f1b70", 00:10:46.090 "is_configured": true, 00:10:46.090 "data_offset": 2048, 00:10:46.090 "data_size": 63488 00:10:46.090 }, 00:10:46.090 { 00:10:46.090 "name": "BaseBdev4", 00:10:46.090 "uuid": "7379c58b-f1e8-4c89-ba0a-e5468ff9c1b0", 00:10:46.090 "is_configured": true, 00:10:46.090 "data_offset": 2048, 00:10:46.090 "data_size": 63488 00:10:46.090 } 00:10:46.090 ] 00:10:46.090 }' 00:10:46.090 04:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.090 04:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.349 04:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.349 04:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:46.349 04:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.349 04:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.349 04:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.349 04:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:46.349 04:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:46.349 04:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.349 04:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.349 [2024-11-21 04:08:46.304393] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:46.349 BaseBdev1 00:10:46.349 04:08:46 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.349 04:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:46.349 04:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:46.350 04:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:46.350 04:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:46.350 04:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:46.350 04:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:46.350 04:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:46.350 04:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.350 04:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.350 04:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.350 04:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:46.350 04:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.350 04:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.609 [ 00:10:46.609 { 00:10:46.609 "name": "BaseBdev1", 00:10:46.609 "aliases": [ 00:10:46.609 "819f1017-579b-42ab-a81b-69895076ea6c" 00:10:46.609 ], 00:10:46.609 "product_name": "Malloc disk", 00:10:46.609 "block_size": 512, 00:10:46.609 "num_blocks": 65536, 00:10:46.609 "uuid": "819f1017-579b-42ab-a81b-69895076ea6c", 00:10:46.609 "assigned_rate_limits": { 00:10:46.609 "rw_ios_per_sec": 0, 00:10:46.609 "rw_mbytes_per_sec": 0, 
00:10:46.609 "r_mbytes_per_sec": 0, 00:10:46.609 "w_mbytes_per_sec": 0 00:10:46.609 }, 00:10:46.609 "claimed": true, 00:10:46.609 "claim_type": "exclusive_write", 00:10:46.609 "zoned": false, 00:10:46.609 "supported_io_types": { 00:10:46.609 "read": true, 00:10:46.609 "write": true, 00:10:46.609 "unmap": true, 00:10:46.609 "flush": true, 00:10:46.609 "reset": true, 00:10:46.609 "nvme_admin": false, 00:10:46.609 "nvme_io": false, 00:10:46.609 "nvme_io_md": false, 00:10:46.609 "write_zeroes": true, 00:10:46.609 "zcopy": true, 00:10:46.609 "get_zone_info": false, 00:10:46.609 "zone_management": false, 00:10:46.609 "zone_append": false, 00:10:46.609 "compare": false, 00:10:46.609 "compare_and_write": false, 00:10:46.609 "abort": true, 00:10:46.609 "seek_hole": false, 00:10:46.609 "seek_data": false, 00:10:46.609 "copy": true, 00:10:46.609 "nvme_iov_md": false 00:10:46.609 }, 00:10:46.609 "memory_domains": [ 00:10:46.609 { 00:10:46.609 "dma_device_id": "system", 00:10:46.609 "dma_device_type": 1 00:10:46.609 }, 00:10:46.609 { 00:10:46.609 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.610 "dma_device_type": 2 00:10:46.610 } 00:10:46.610 ], 00:10:46.610 "driver_specific": {} 00:10:46.610 } 00:10:46.610 ] 00:10:46.610 04:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.610 04:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:46.610 04:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:46.610 04:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:46.610 04:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:46.610 04:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:46.610 04:08:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:46.610 04:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:46.610 04:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.610 04:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.610 04:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.610 04:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.610 04:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:46.610 04:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.610 04:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.610 04:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.610 04:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.610 04:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.610 "name": "Existed_Raid", 00:10:46.610 "uuid": "4e3bda96-3311-44e5-98af-68e518adbd53", 00:10:46.610 "strip_size_kb": 64, 00:10:46.610 "state": "configuring", 00:10:46.610 "raid_level": "concat", 00:10:46.610 "superblock": true, 00:10:46.610 "num_base_bdevs": 4, 00:10:46.610 "num_base_bdevs_discovered": 3, 00:10:46.610 "num_base_bdevs_operational": 4, 00:10:46.610 "base_bdevs_list": [ 00:10:46.610 { 00:10:46.610 "name": "BaseBdev1", 00:10:46.610 "uuid": "819f1017-579b-42ab-a81b-69895076ea6c", 00:10:46.610 "is_configured": true, 00:10:46.610 "data_offset": 2048, 00:10:46.610 "data_size": 63488 00:10:46.610 }, 00:10:46.610 { 
00:10:46.610 "name": null, 00:10:46.610 "uuid": "5af1ae2d-f697-41b6-affd-4914d0c16553", 00:10:46.610 "is_configured": false, 00:10:46.610 "data_offset": 0, 00:10:46.610 "data_size": 63488 00:10:46.610 }, 00:10:46.610 { 00:10:46.610 "name": "BaseBdev3", 00:10:46.610 "uuid": "14f4c536-94a8-45cd-8b52-21c88d6f1b70", 00:10:46.610 "is_configured": true, 00:10:46.610 "data_offset": 2048, 00:10:46.610 "data_size": 63488 00:10:46.610 }, 00:10:46.610 { 00:10:46.610 "name": "BaseBdev4", 00:10:46.610 "uuid": "7379c58b-f1e8-4c89-ba0a-e5468ff9c1b0", 00:10:46.610 "is_configured": true, 00:10:46.610 "data_offset": 2048, 00:10:46.610 "data_size": 63488 00:10:46.610 } 00:10:46.610 ] 00:10:46.610 }' 00:10:46.610 04:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.610 04:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.871 04:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:46.871 04:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.871 04:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.871 04:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.871 04:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.871 04:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:46.871 04:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:46.871 04:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.871 04:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.871 [2024-11-21 04:08:46.783641] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:46.871 04:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.871 04:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:46.871 04:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:46.871 04:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:46.871 04:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:46.871 04:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:46.871 04:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:46.871 04:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.871 04:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.871 04:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.871 04:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.871 04:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.871 04:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:46.871 04:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.871 04:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.871 04:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.871 04:08:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.871 "name": "Existed_Raid", 00:10:46.871 "uuid": "4e3bda96-3311-44e5-98af-68e518adbd53", 00:10:46.871 "strip_size_kb": 64, 00:10:46.871 "state": "configuring", 00:10:46.871 "raid_level": "concat", 00:10:46.871 "superblock": true, 00:10:46.871 "num_base_bdevs": 4, 00:10:46.871 "num_base_bdevs_discovered": 2, 00:10:46.871 "num_base_bdevs_operational": 4, 00:10:46.871 "base_bdevs_list": [ 00:10:46.871 { 00:10:46.871 "name": "BaseBdev1", 00:10:46.871 "uuid": "819f1017-579b-42ab-a81b-69895076ea6c", 00:10:46.871 "is_configured": true, 00:10:46.871 "data_offset": 2048, 00:10:46.871 "data_size": 63488 00:10:46.871 }, 00:10:46.871 { 00:10:46.871 "name": null, 00:10:46.871 "uuid": "5af1ae2d-f697-41b6-affd-4914d0c16553", 00:10:46.871 "is_configured": false, 00:10:46.871 "data_offset": 0, 00:10:46.871 "data_size": 63488 00:10:46.871 }, 00:10:46.871 { 00:10:46.871 "name": null, 00:10:46.871 "uuid": "14f4c536-94a8-45cd-8b52-21c88d6f1b70", 00:10:46.871 "is_configured": false, 00:10:46.871 "data_offset": 0, 00:10:46.871 "data_size": 63488 00:10:46.871 }, 00:10:46.871 { 00:10:46.871 "name": "BaseBdev4", 00:10:46.871 "uuid": "7379c58b-f1e8-4c89-ba0a-e5468ff9c1b0", 00:10:46.871 "is_configured": true, 00:10:46.871 "data_offset": 2048, 00:10:46.871 "data_size": 63488 00:10:46.871 } 00:10:46.871 ] 00:10:46.871 }' 00:10:46.871 04:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.871 04:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.441 04:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.441 04:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:47.441 04:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.441 
04:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.441 04:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.441 04:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:47.441 04:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:47.441 04:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.441 04:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.441 [2024-11-21 04:08:47.198974] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:47.441 04:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.441 04:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:47.441 04:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:47.441 04:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:47.441 04:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:47.441 04:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:47.441 04:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:47.441 04:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.441 04:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.441 04:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:47.441 04:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.441 04:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.441 04:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.441 04:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.441 04:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:47.441 04:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.441 04:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.441 "name": "Existed_Raid", 00:10:47.441 "uuid": "4e3bda96-3311-44e5-98af-68e518adbd53", 00:10:47.441 "strip_size_kb": 64, 00:10:47.441 "state": "configuring", 00:10:47.441 "raid_level": "concat", 00:10:47.442 "superblock": true, 00:10:47.442 "num_base_bdevs": 4, 00:10:47.442 "num_base_bdevs_discovered": 3, 00:10:47.442 "num_base_bdevs_operational": 4, 00:10:47.442 "base_bdevs_list": [ 00:10:47.442 { 00:10:47.442 "name": "BaseBdev1", 00:10:47.442 "uuid": "819f1017-579b-42ab-a81b-69895076ea6c", 00:10:47.442 "is_configured": true, 00:10:47.442 "data_offset": 2048, 00:10:47.442 "data_size": 63488 00:10:47.442 }, 00:10:47.442 { 00:10:47.442 "name": null, 00:10:47.442 "uuid": "5af1ae2d-f697-41b6-affd-4914d0c16553", 00:10:47.442 "is_configured": false, 00:10:47.442 "data_offset": 0, 00:10:47.442 "data_size": 63488 00:10:47.442 }, 00:10:47.442 { 00:10:47.442 "name": "BaseBdev3", 00:10:47.442 "uuid": "14f4c536-94a8-45cd-8b52-21c88d6f1b70", 00:10:47.442 "is_configured": true, 00:10:47.442 "data_offset": 2048, 00:10:47.442 "data_size": 63488 00:10:47.442 }, 00:10:47.442 { 00:10:47.442 "name": "BaseBdev4", 00:10:47.442 "uuid": 
"7379c58b-f1e8-4c89-ba0a-e5468ff9c1b0", 00:10:47.442 "is_configured": true, 00:10:47.442 "data_offset": 2048, 00:10:47.442 "data_size": 63488 00:10:47.442 } 00:10:47.442 ] 00:10:47.442 }' 00:10:47.442 04:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.442 04:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.702 04:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.702 04:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.702 04:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.702 04:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:47.702 04:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.962 04:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:47.962 04:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:47.962 04:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.962 04:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.962 [2024-11-21 04:08:47.706132] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:47.962 04:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.962 04:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:47.962 04:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:47.962 04:08:47 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:47.962 04:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:47.962 04:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:47.962 04:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:47.962 04:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.962 04:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.962 04:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.962 04:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.962 04:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.962 04:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:47.962 04:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.962 04:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.962 04:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.962 04:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.962 "name": "Existed_Raid", 00:10:47.962 "uuid": "4e3bda96-3311-44e5-98af-68e518adbd53", 00:10:47.962 "strip_size_kb": 64, 00:10:47.962 "state": "configuring", 00:10:47.962 "raid_level": "concat", 00:10:47.962 "superblock": true, 00:10:47.962 "num_base_bdevs": 4, 00:10:47.962 "num_base_bdevs_discovered": 2, 00:10:47.962 "num_base_bdevs_operational": 4, 00:10:47.962 "base_bdevs_list": [ 00:10:47.962 { 00:10:47.962 "name": null, 00:10:47.962 
"uuid": "819f1017-579b-42ab-a81b-69895076ea6c", 00:10:47.962 "is_configured": false, 00:10:47.962 "data_offset": 0, 00:10:47.962 "data_size": 63488 00:10:47.962 }, 00:10:47.962 { 00:10:47.962 "name": null, 00:10:47.962 "uuid": "5af1ae2d-f697-41b6-affd-4914d0c16553", 00:10:47.962 "is_configured": false, 00:10:47.962 "data_offset": 0, 00:10:47.962 "data_size": 63488 00:10:47.962 }, 00:10:47.962 { 00:10:47.962 "name": "BaseBdev3", 00:10:47.962 "uuid": "14f4c536-94a8-45cd-8b52-21c88d6f1b70", 00:10:47.962 "is_configured": true, 00:10:47.962 "data_offset": 2048, 00:10:47.962 "data_size": 63488 00:10:47.962 }, 00:10:47.962 { 00:10:47.962 "name": "BaseBdev4", 00:10:47.962 "uuid": "7379c58b-f1e8-4c89-ba0a-e5468ff9c1b0", 00:10:47.962 "is_configured": true, 00:10:47.962 "data_offset": 2048, 00:10:47.962 "data_size": 63488 00:10:47.962 } 00:10:47.962 ] 00:10:47.962 }' 00:10:47.962 04:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.962 04:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.532 04:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:48.532 04:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.532 04:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.532 04:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.532 04:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.532 04:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:48.532 04:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:48.532 04:08:48 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.532 04:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.532 [2024-11-21 04:08:48.237395] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:48.532 04:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.532 04:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:48.532 04:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:48.532 04:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:48.532 04:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:48.532 04:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:48.532 04:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:48.532 04:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.532 04:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.532 04:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.532 04:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.532 04:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.532 04:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.532 04:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.532 04:08:48 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:48.532 04:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.532 04:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.532 "name": "Existed_Raid", 00:10:48.532 "uuid": "4e3bda96-3311-44e5-98af-68e518adbd53", 00:10:48.532 "strip_size_kb": 64, 00:10:48.532 "state": "configuring", 00:10:48.532 "raid_level": "concat", 00:10:48.532 "superblock": true, 00:10:48.532 "num_base_bdevs": 4, 00:10:48.532 "num_base_bdevs_discovered": 3, 00:10:48.532 "num_base_bdevs_operational": 4, 00:10:48.532 "base_bdevs_list": [ 00:10:48.532 { 00:10:48.532 "name": null, 00:10:48.532 "uuid": "819f1017-579b-42ab-a81b-69895076ea6c", 00:10:48.532 "is_configured": false, 00:10:48.532 "data_offset": 0, 00:10:48.532 "data_size": 63488 00:10:48.532 }, 00:10:48.532 { 00:10:48.532 "name": "BaseBdev2", 00:10:48.532 "uuid": "5af1ae2d-f697-41b6-affd-4914d0c16553", 00:10:48.532 "is_configured": true, 00:10:48.532 "data_offset": 2048, 00:10:48.532 "data_size": 63488 00:10:48.532 }, 00:10:48.532 { 00:10:48.532 "name": "BaseBdev3", 00:10:48.532 "uuid": "14f4c536-94a8-45cd-8b52-21c88d6f1b70", 00:10:48.532 "is_configured": true, 00:10:48.532 "data_offset": 2048, 00:10:48.532 "data_size": 63488 00:10:48.532 }, 00:10:48.532 { 00:10:48.532 "name": "BaseBdev4", 00:10:48.532 "uuid": "7379c58b-f1e8-4c89-ba0a-e5468ff9c1b0", 00:10:48.532 "is_configured": true, 00:10:48.533 "data_offset": 2048, 00:10:48.533 "data_size": 63488 00:10:48.533 } 00:10:48.533 ] 00:10:48.533 }' 00:10:48.533 04:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.533 04:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.793 04:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.793 04:08:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:48.793 04:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.793 04:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.793 04:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.793 04:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:48.793 04:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:48.793 04:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.793 04:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.793 04:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.793 04:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.793 04:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 819f1017-579b-42ab-a81b-69895076ea6c 00:10:48.793 04:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.793 04:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.793 [2024-11-21 04:08:48.681381] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:48.793 [2024-11-21 04:08:48.681698] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:10:48.793 [2024-11-21 04:08:48.681749] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:48.793 [2024-11-21 04:08:48.682081] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000002a10 00:10:48.793 NewBaseBdev 00:10:48.793 [2024-11-21 04:08:48.682262] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:10:48.793 [2024-11-21 04:08:48.682278] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:10:48.793 [2024-11-21 04:08:48.682389] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:48.793 04:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.793 04:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:48.793 04:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:48.793 04:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:48.793 04:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:48.793 04:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:48.793 04:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:48.793 04:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:48.793 04:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.793 04:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.793 04:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.793 04:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:48.793 04:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.793 04:08:48 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.793 [ 00:10:48.793 { 00:10:48.793 "name": "NewBaseBdev", 00:10:48.793 "aliases": [ 00:10:48.793 "819f1017-579b-42ab-a81b-69895076ea6c" 00:10:48.793 ], 00:10:48.793 "product_name": "Malloc disk", 00:10:48.793 "block_size": 512, 00:10:48.793 "num_blocks": 65536, 00:10:48.793 "uuid": "819f1017-579b-42ab-a81b-69895076ea6c", 00:10:48.793 "assigned_rate_limits": { 00:10:48.793 "rw_ios_per_sec": 0, 00:10:48.793 "rw_mbytes_per_sec": 0, 00:10:48.793 "r_mbytes_per_sec": 0, 00:10:48.793 "w_mbytes_per_sec": 0 00:10:48.793 }, 00:10:48.793 "claimed": true, 00:10:48.793 "claim_type": "exclusive_write", 00:10:48.793 "zoned": false, 00:10:48.793 "supported_io_types": { 00:10:48.793 "read": true, 00:10:48.793 "write": true, 00:10:48.793 "unmap": true, 00:10:48.793 "flush": true, 00:10:48.793 "reset": true, 00:10:48.793 "nvme_admin": false, 00:10:48.793 "nvme_io": false, 00:10:48.793 "nvme_io_md": false, 00:10:48.793 "write_zeroes": true, 00:10:48.793 "zcopy": true, 00:10:48.793 "get_zone_info": false, 00:10:48.793 "zone_management": false, 00:10:48.793 "zone_append": false, 00:10:48.793 "compare": false, 00:10:48.793 "compare_and_write": false, 00:10:48.793 "abort": true, 00:10:48.793 "seek_hole": false, 00:10:48.793 "seek_data": false, 00:10:48.793 "copy": true, 00:10:48.793 "nvme_iov_md": false 00:10:48.793 }, 00:10:48.793 "memory_domains": [ 00:10:48.793 { 00:10:48.793 "dma_device_id": "system", 00:10:48.793 "dma_device_type": 1 00:10:48.793 }, 00:10:48.793 { 00:10:48.793 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.793 "dma_device_type": 2 00:10:48.793 } 00:10:48.793 ], 00:10:48.793 "driver_specific": {} 00:10:48.793 } 00:10:48.793 ] 00:10:48.793 04:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.793 04:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:48.793 04:08:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:48.793 04:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:48.793 04:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:48.793 04:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:48.793 04:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:48.793 04:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:48.793 04:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.793 04:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.793 04:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.793 04:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.793 04:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.793 04:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.793 04:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.793 04:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:48.794 04:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.053 04:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.053 "name": "Existed_Raid", 00:10:49.053 "uuid": "4e3bda96-3311-44e5-98af-68e518adbd53", 00:10:49.053 "strip_size_kb": 64, 00:10:49.053 
"state": "online", 00:10:49.053 "raid_level": "concat", 00:10:49.053 "superblock": true, 00:10:49.053 "num_base_bdevs": 4, 00:10:49.053 "num_base_bdevs_discovered": 4, 00:10:49.053 "num_base_bdevs_operational": 4, 00:10:49.053 "base_bdevs_list": [ 00:10:49.053 { 00:10:49.053 "name": "NewBaseBdev", 00:10:49.053 "uuid": "819f1017-579b-42ab-a81b-69895076ea6c", 00:10:49.053 "is_configured": true, 00:10:49.053 "data_offset": 2048, 00:10:49.053 "data_size": 63488 00:10:49.053 }, 00:10:49.053 { 00:10:49.053 "name": "BaseBdev2", 00:10:49.053 "uuid": "5af1ae2d-f697-41b6-affd-4914d0c16553", 00:10:49.053 "is_configured": true, 00:10:49.053 "data_offset": 2048, 00:10:49.053 "data_size": 63488 00:10:49.053 }, 00:10:49.053 { 00:10:49.053 "name": "BaseBdev3", 00:10:49.053 "uuid": "14f4c536-94a8-45cd-8b52-21c88d6f1b70", 00:10:49.053 "is_configured": true, 00:10:49.053 "data_offset": 2048, 00:10:49.053 "data_size": 63488 00:10:49.053 }, 00:10:49.053 { 00:10:49.053 "name": "BaseBdev4", 00:10:49.053 "uuid": "7379c58b-f1e8-4c89-ba0a-e5468ff9c1b0", 00:10:49.053 "is_configured": true, 00:10:49.053 "data_offset": 2048, 00:10:49.053 "data_size": 63488 00:10:49.053 } 00:10:49.053 ] 00:10:49.053 }' 00:10:49.053 04:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.053 04:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.317 04:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:49.317 04:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:49.317 04:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:49.317 04:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:49.317 04:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:49.317 
04:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:49.317 04:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:49.317 04:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.317 04:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.317 04:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:49.317 [2024-11-21 04:08:49.109092] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:49.317 04:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.317 04:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:49.317 "name": "Existed_Raid", 00:10:49.317 "aliases": [ 00:10:49.317 "4e3bda96-3311-44e5-98af-68e518adbd53" 00:10:49.317 ], 00:10:49.317 "product_name": "Raid Volume", 00:10:49.317 "block_size": 512, 00:10:49.317 "num_blocks": 253952, 00:10:49.317 "uuid": "4e3bda96-3311-44e5-98af-68e518adbd53", 00:10:49.317 "assigned_rate_limits": { 00:10:49.317 "rw_ios_per_sec": 0, 00:10:49.317 "rw_mbytes_per_sec": 0, 00:10:49.317 "r_mbytes_per_sec": 0, 00:10:49.317 "w_mbytes_per_sec": 0 00:10:49.317 }, 00:10:49.317 "claimed": false, 00:10:49.317 "zoned": false, 00:10:49.317 "supported_io_types": { 00:10:49.317 "read": true, 00:10:49.317 "write": true, 00:10:49.317 "unmap": true, 00:10:49.317 "flush": true, 00:10:49.317 "reset": true, 00:10:49.317 "nvme_admin": false, 00:10:49.317 "nvme_io": false, 00:10:49.317 "nvme_io_md": false, 00:10:49.317 "write_zeroes": true, 00:10:49.317 "zcopy": false, 00:10:49.317 "get_zone_info": false, 00:10:49.317 "zone_management": false, 00:10:49.317 "zone_append": false, 00:10:49.317 "compare": false, 00:10:49.317 "compare_and_write": false, 00:10:49.317 "abort": 
false, 00:10:49.317 "seek_hole": false, 00:10:49.317 "seek_data": false, 00:10:49.317 "copy": false, 00:10:49.317 "nvme_iov_md": false 00:10:49.317 }, 00:10:49.317 "memory_domains": [ 00:10:49.317 { 00:10:49.317 "dma_device_id": "system", 00:10:49.317 "dma_device_type": 1 00:10:49.317 }, 00:10:49.317 { 00:10:49.317 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.317 "dma_device_type": 2 00:10:49.317 }, 00:10:49.317 { 00:10:49.317 "dma_device_id": "system", 00:10:49.317 "dma_device_type": 1 00:10:49.317 }, 00:10:49.317 { 00:10:49.317 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.317 "dma_device_type": 2 00:10:49.317 }, 00:10:49.317 { 00:10:49.317 "dma_device_id": "system", 00:10:49.317 "dma_device_type": 1 00:10:49.317 }, 00:10:49.317 { 00:10:49.317 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.317 "dma_device_type": 2 00:10:49.317 }, 00:10:49.317 { 00:10:49.317 "dma_device_id": "system", 00:10:49.317 "dma_device_type": 1 00:10:49.317 }, 00:10:49.317 { 00:10:49.317 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.317 "dma_device_type": 2 00:10:49.317 } 00:10:49.317 ], 00:10:49.317 "driver_specific": { 00:10:49.317 "raid": { 00:10:49.317 "uuid": "4e3bda96-3311-44e5-98af-68e518adbd53", 00:10:49.317 "strip_size_kb": 64, 00:10:49.317 "state": "online", 00:10:49.317 "raid_level": "concat", 00:10:49.317 "superblock": true, 00:10:49.317 "num_base_bdevs": 4, 00:10:49.317 "num_base_bdevs_discovered": 4, 00:10:49.317 "num_base_bdevs_operational": 4, 00:10:49.317 "base_bdevs_list": [ 00:10:49.317 { 00:10:49.317 "name": "NewBaseBdev", 00:10:49.317 "uuid": "819f1017-579b-42ab-a81b-69895076ea6c", 00:10:49.317 "is_configured": true, 00:10:49.317 "data_offset": 2048, 00:10:49.317 "data_size": 63488 00:10:49.317 }, 00:10:49.317 { 00:10:49.317 "name": "BaseBdev2", 00:10:49.317 "uuid": "5af1ae2d-f697-41b6-affd-4914d0c16553", 00:10:49.317 "is_configured": true, 00:10:49.317 "data_offset": 2048, 00:10:49.317 "data_size": 63488 00:10:49.317 }, 00:10:49.317 { 00:10:49.317 
"name": "BaseBdev3", 00:10:49.317 "uuid": "14f4c536-94a8-45cd-8b52-21c88d6f1b70", 00:10:49.317 "is_configured": true, 00:10:49.317 "data_offset": 2048, 00:10:49.317 "data_size": 63488 00:10:49.317 }, 00:10:49.317 { 00:10:49.317 "name": "BaseBdev4", 00:10:49.317 "uuid": "7379c58b-f1e8-4c89-ba0a-e5468ff9c1b0", 00:10:49.317 "is_configured": true, 00:10:49.317 "data_offset": 2048, 00:10:49.317 "data_size": 63488 00:10:49.317 } 00:10:49.317 ] 00:10:49.317 } 00:10:49.317 } 00:10:49.317 }' 00:10:49.317 04:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:49.317 04:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:49.317 BaseBdev2 00:10:49.317 BaseBdev3 00:10:49.317 BaseBdev4' 00:10:49.317 04:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:49.317 04:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:49.318 04:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:49.318 04:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:49.318 04:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:49.318 04:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.318 04:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.318 04:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.318 04:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:49.318 04:08:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:49.318 04:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:49.318 04:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:49.318 04:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.318 04:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.318 04:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:49.591 04:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.591 04:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:49.591 04:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:49.591 04:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:49.591 04:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:49.591 04:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:49.591 04:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.591 04:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.591 04:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.591 04:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:49.591 04:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:10:49.591 04:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:49.591 04:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:49.591 04:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.591 04:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.591 04:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:49.591 04:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.591 04:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:49.591 04:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:49.591 04:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:49.591 04:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.591 04:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.591 [2024-11-21 04:08:49.396208] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:49.591 [2024-11-21 04:08:49.396257] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:49.591 [2024-11-21 04:08:49.396341] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:49.591 [2024-11-21 04:08:49.396422] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:49.591 [2024-11-21 04:08:49.396433] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, 
state offline 00:10:49.591 04:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.591 04:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 82847 00:10:49.591 04:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 82847 ']' 00:10:49.592 04:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 82847 00:10:49.592 04:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:49.592 04:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:49.592 04:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82847 00:10:49.592 04:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:49.592 04:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:49.592 04:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82847' 00:10:49.592 killing process with pid 82847 00:10:49.592 04:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 82847 00:10:49.592 [2024-11-21 04:08:49.439607] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:49.592 04:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 82847 00:10:49.592 [2024-11-21 04:08:49.516890] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:50.163 ************************************ 00:10:50.163 END TEST raid_state_function_test_sb 00:10:50.163 04:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:50.163 00:10:50.163 real 0m9.519s 00:10:50.163 user 0m15.880s 00:10:50.163 sys 0m2.129s 00:10:50.163 04:08:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:50.163 04:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.163 ************************************ 00:10:50.163 04:08:49 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:10:50.163 04:08:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:50.163 04:08:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:50.163 04:08:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:50.163 ************************************ 00:10:50.163 START TEST raid_superblock_test 00:10:50.163 ************************************ 00:10:50.163 04:08:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4 00:10:50.163 04:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:10:50.163 04:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:10:50.163 04:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:50.163 04:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:50.163 04:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:50.163 04:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:50.163 04:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:50.163 04:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:50.163 04:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:50.163 04:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:50.163 04:08:49 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:50.163 04:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:50.163 04:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:50.163 04:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:10:50.163 04:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:50.164 04:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:50.164 04:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=83491 00:10:50.164 04:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:50.164 04:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 83491 00:10:50.164 04:08:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 83491 ']' 00:10:50.164 04:08:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:50.164 04:08:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:50.164 04:08:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:50.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:50.164 04:08:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:50.164 04:08:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.164 [2024-11-21 04:08:50.027607] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:10:50.164 [2024-11-21 04:08:50.028639] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83491 ] 00:10:50.424 [2024-11-21 04:08:50.202567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:50.424 [2024-11-21 04:08:50.243791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.424 [2024-11-21 04:08:50.319545] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:50.424 [2024-11-21 04:08:50.319671] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:50.995 04:08:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:50.995 04:08:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:50.995 04:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:50.995 04:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:50.995 04:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:50.995 04:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:50.995 04:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:50.995 04:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:50.995 04:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:50.995 04:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:50.995 04:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:50.995 
04:08:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.995 04:08:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.995 malloc1 00:10:50.995 04:08:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.995 04:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:50.995 04:08:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.995 04:08:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.995 [2024-11-21 04:08:50.889609] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:50.995 [2024-11-21 04:08:50.889756] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:50.995 [2024-11-21 04:08:50.889803] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:10:50.995 [2024-11-21 04:08:50.889843] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:50.996 [2024-11-21 04:08:50.892398] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:50.996 [2024-11-21 04:08:50.892475] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:50.996 pt1 00:10:50.996 04:08:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.996 04:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:50.996 04:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:50.996 04:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:50.996 04:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:50.996 04:08:50 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:50.996 04:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:50.996 04:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:50.996 04:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:50.996 04:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:50.996 04:08:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.996 04:08:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.996 malloc2 00:10:50.996 04:08:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.996 04:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:50.996 04:08:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.996 04:08:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.996 [2024-11-21 04:08:50.924135] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:50.996 [2024-11-21 04:08:50.924193] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:50.996 [2024-11-21 04:08:50.924210] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:50.996 [2024-11-21 04:08:50.924237] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:50.996 [2024-11-21 04:08:50.926582] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:50.996 [2024-11-21 04:08:50.926618] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:50.996 
pt2 00:10:50.996 04:08:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.996 04:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:50.996 04:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:50.996 04:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:50.996 04:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:50.996 04:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:50.996 04:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:50.996 04:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:50.996 04:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:50.996 04:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:50.996 04:08:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.996 04:08:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.996 malloc3 00:10:50.996 04:08:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.996 04:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:50.996 04:08:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.996 04:08:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.996 [2024-11-21 04:08:50.958674] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:50.996 [2024-11-21 04:08:50.958811] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:50.996 [2024-11-21 04:08:50.958852] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:50.996 [2024-11-21 04:08:50.958885] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:50.996 [2024-11-21 04:08:50.961275] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:50.996 [2024-11-21 04:08:50.961347] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:50.996 pt3 00:10:50.996 04:08:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.996 04:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:50.996 04:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:50.996 04:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:10:50.996 04:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:10:50.996 04:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:10:50.996 04:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:50.996 04:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:50.996 04:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:51.257 04:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:10:51.257 04:08:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.257 04:08:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.257 malloc4 00:10:51.257 04:08:50 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.257 04:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:51.257 04:08:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.257 04:08:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.257 [2024-11-21 04:08:51.004210] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:51.257 [2024-11-21 04:08:51.004333] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:51.257 [2024-11-21 04:08:51.004375] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:51.257 [2024-11-21 04:08:51.004411] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:51.257 [2024-11-21 04:08:51.006755] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:51.257 [2024-11-21 04:08:51.006824] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:51.257 pt4 00:10:51.257 04:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.257 04:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:51.257 04:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:51.257 04:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:10:51.257 04:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.257 04:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.257 [2024-11-21 04:08:51.016238] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:51.257 [2024-11-21 
04:08:51.018327] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:51.257 [2024-11-21 04:08:51.018394] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:51.257 [2024-11-21 04:08:51.018440] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:51.257 [2024-11-21 04:08:51.018593] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:10:51.257 [2024-11-21 04:08:51.018611] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:51.257 [2024-11-21 04:08:51.018893] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:10:51.257 [2024-11-21 04:08:51.019063] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:10:51.257 [2024-11-21 04:08:51.019085] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:10:51.257 [2024-11-21 04:08:51.019253] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:51.257 04:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.257 04:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:51.257 04:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:51.257 04:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:51.257 04:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:51.257 04:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:51.257 04:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:51.257 04:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:51.257 04:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.257 04:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.257 04:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.257 04:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.257 04:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:51.257 04:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.257 04:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.257 04:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.257 04:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.257 "name": "raid_bdev1", 00:10:51.257 "uuid": "792d1337-5197-4740-8c56-35014478a5a6", 00:10:51.257 "strip_size_kb": 64, 00:10:51.257 "state": "online", 00:10:51.257 "raid_level": "concat", 00:10:51.257 "superblock": true, 00:10:51.257 "num_base_bdevs": 4, 00:10:51.257 "num_base_bdevs_discovered": 4, 00:10:51.257 "num_base_bdevs_operational": 4, 00:10:51.257 "base_bdevs_list": [ 00:10:51.257 { 00:10:51.257 "name": "pt1", 00:10:51.257 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:51.257 "is_configured": true, 00:10:51.257 "data_offset": 2048, 00:10:51.257 "data_size": 63488 00:10:51.257 }, 00:10:51.257 { 00:10:51.257 "name": "pt2", 00:10:51.257 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:51.257 "is_configured": true, 00:10:51.257 "data_offset": 2048, 00:10:51.257 "data_size": 63488 00:10:51.257 }, 00:10:51.257 { 00:10:51.257 "name": "pt3", 00:10:51.257 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:51.257 "is_configured": true, 00:10:51.257 "data_offset": 2048, 00:10:51.257 
"data_size": 63488 00:10:51.257 }, 00:10:51.257 { 00:10:51.257 "name": "pt4", 00:10:51.257 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:51.257 "is_configured": true, 00:10:51.257 "data_offset": 2048, 00:10:51.257 "data_size": 63488 00:10:51.257 } 00:10:51.257 ] 00:10:51.257 }' 00:10:51.257 04:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.257 04:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.518 04:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:51.518 04:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:51.518 04:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:51.518 04:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:51.518 04:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:51.518 04:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:51.518 04:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:51.518 04:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.518 04:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.518 04:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:51.518 [2024-11-21 04:08:51.431883] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:51.518 04:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.518 04:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:51.518 "name": "raid_bdev1", 00:10:51.518 "aliases": [ 00:10:51.518 "792d1337-5197-4740-8c56-35014478a5a6" 
00:10:51.518 ], 00:10:51.518 "product_name": "Raid Volume", 00:10:51.518 "block_size": 512, 00:10:51.518 "num_blocks": 253952, 00:10:51.518 "uuid": "792d1337-5197-4740-8c56-35014478a5a6", 00:10:51.518 "assigned_rate_limits": { 00:10:51.518 "rw_ios_per_sec": 0, 00:10:51.518 "rw_mbytes_per_sec": 0, 00:10:51.518 "r_mbytes_per_sec": 0, 00:10:51.518 "w_mbytes_per_sec": 0 00:10:51.518 }, 00:10:51.518 "claimed": false, 00:10:51.518 "zoned": false, 00:10:51.518 "supported_io_types": { 00:10:51.518 "read": true, 00:10:51.518 "write": true, 00:10:51.518 "unmap": true, 00:10:51.518 "flush": true, 00:10:51.518 "reset": true, 00:10:51.518 "nvme_admin": false, 00:10:51.518 "nvme_io": false, 00:10:51.518 "nvme_io_md": false, 00:10:51.518 "write_zeroes": true, 00:10:51.518 "zcopy": false, 00:10:51.518 "get_zone_info": false, 00:10:51.518 "zone_management": false, 00:10:51.518 "zone_append": false, 00:10:51.518 "compare": false, 00:10:51.518 "compare_and_write": false, 00:10:51.518 "abort": false, 00:10:51.518 "seek_hole": false, 00:10:51.518 "seek_data": false, 00:10:51.518 "copy": false, 00:10:51.518 "nvme_iov_md": false 00:10:51.518 }, 00:10:51.518 "memory_domains": [ 00:10:51.518 { 00:10:51.518 "dma_device_id": "system", 00:10:51.518 "dma_device_type": 1 00:10:51.518 }, 00:10:51.518 { 00:10:51.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.518 "dma_device_type": 2 00:10:51.518 }, 00:10:51.518 { 00:10:51.518 "dma_device_id": "system", 00:10:51.518 "dma_device_type": 1 00:10:51.518 }, 00:10:51.518 { 00:10:51.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.518 "dma_device_type": 2 00:10:51.518 }, 00:10:51.518 { 00:10:51.518 "dma_device_id": "system", 00:10:51.518 "dma_device_type": 1 00:10:51.518 }, 00:10:51.518 { 00:10:51.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.518 "dma_device_type": 2 00:10:51.518 }, 00:10:51.518 { 00:10:51.518 "dma_device_id": "system", 00:10:51.518 "dma_device_type": 1 00:10:51.518 }, 00:10:51.518 { 00:10:51.518 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:51.518 "dma_device_type": 2 00:10:51.518 } 00:10:51.518 ], 00:10:51.518 "driver_specific": { 00:10:51.518 "raid": { 00:10:51.518 "uuid": "792d1337-5197-4740-8c56-35014478a5a6", 00:10:51.518 "strip_size_kb": 64, 00:10:51.518 "state": "online", 00:10:51.518 "raid_level": "concat", 00:10:51.518 "superblock": true, 00:10:51.518 "num_base_bdevs": 4, 00:10:51.518 "num_base_bdevs_discovered": 4, 00:10:51.518 "num_base_bdevs_operational": 4, 00:10:51.518 "base_bdevs_list": [ 00:10:51.518 { 00:10:51.519 "name": "pt1", 00:10:51.519 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:51.519 "is_configured": true, 00:10:51.519 "data_offset": 2048, 00:10:51.519 "data_size": 63488 00:10:51.519 }, 00:10:51.519 { 00:10:51.519 "name": "pt2", 00:10:51.519 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:51.519 "is_configured": true, 00:10:51.519 "data_offset": 2048, 00:10:51.519 "data_size": 63488 00:10:51.519 }, 00:10:51.519 { 00:10:51.519 "name": "pt3", 00:10:51.519 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:51.519 "is_configured": true, 00:10:51.519 "data_offset": 2048, 00:10:51.519 "data_size": 63488 00:10:51.519 }, 00:10:51.519 { 00:10:51.519 "name": "pt4", 00:10:51.519 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:51.519 "is_configured": true, 00:10:51.519 "data_offset": 2048, 00:10:51.519 "data_size": 63488 00:10:51.519 } 00:10:51.519 ] 00:10:51.519 } 00:10:51.519 } 00:10:51.519 }' 00:10:51.519 04:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:51.779 04:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:51.779 pt2 00:10:51.779 pt3 00:10:51.779 pt4' 00:10:51.779 04:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:51.779 04:08:51 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:51.779 04:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:51.779 04:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:51.779 04:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.779 04:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.779 04:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:51.779 04:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.779 04:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:51.779 04:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:51.779 04:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:51.779 04:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:51.779 04:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:51.779 04:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.779 04:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.779 04:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.779 04:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:51.779 04:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:51.779 04:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:51.779 04:08:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:51.779 04:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:51.779 04:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.779 04:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.779 04:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.779 04:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:51.779 04:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:51.779 04:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:51.779 04:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:51.779 04:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:51.779 04:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.779 04:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.779 04:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.779 04:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:51.779 04:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:51.779 04:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:51.779 04:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:51.779 04:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:51.779 04:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.779 [2024-11-21 04:08:51.747265] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:52.040 04:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.040 04:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=792d1337-5197-4740-8c56-35014478a5a6 00:10:52.040 04:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 792d1337-5197-4740-8c56-35014478a5a6 ']' 00:10:52.040 04:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:52.040 04:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.040 04:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.040 [2024-11-21 04:08:51.790918] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:52.040 [2024-11-21 04:08:51.790968] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:52.040 [2024-11-21 04:08:51.791070] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:52.040 [2024-11-21 04:08:51.791171] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:52.040 [2024-11-21 04:08:51.791230] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:10:52.040 04:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.040 04:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:52.040 04:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.040 04:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:10:52.041 04:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.041 04:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.041 04:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:52.041 04:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:52.041 04:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:52.041 04:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:52.041 04:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.041 04:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.041 04:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.041 04:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:52.041 04:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:52.041 04:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.041 04:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.041 04:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.041 04:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:52.041 04:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:52.041 04:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.041 04:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.041 04:08:51 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.041 04:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:52.041 04:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:10:52.041 04:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.041 04:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.041 04:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.041 04:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:52.041 04:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:52.041 04:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.041 04:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.041 04:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.041 04:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:52.041 04:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:52.041 04:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:52.041 04:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:52.041 04:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:52.041 04:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:52.041 04:08:51 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:52.041 04:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:52.041 04:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:52.041 04:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.041 04:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.041 [2024-11-21 04:08:51.934685] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:52.041 [2024-11-21 04:08:51.936855] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:52.041 [2024-11-21 04:08:51.936909] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:52.041 [2024-11-21 04:08:51.936939] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:10:52.041 [2024-11-21 04:08:51.936990] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:52.041 [2024-11-21 04:08:51.937034] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:52.041 [2024-11-21 04:08:51.937058] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:52.041 [2024-11-21 04:08:51.937074] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:10:52.041 [2024-11-21 04:08:51.937090] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:52.041 [2024-11-21 04:08:51.937100] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000001580 name raid_bdev1, state configuring 00:10:52.041 request: 00:10:52.041 { 00:10:52.041 "name": "raid_bdev1", 00:10:52.041 "raid_level": "concat", 00:10:52.041 "base_bdevs": [ 00:10:52.041 "malloc1", 00:10:52.041 "malloc2", 00:10:52.041 "malloc3", 00:10:52.041 "malloc4" 00:10:52.041 ], 00:10:52.041 "strip_size_kb": 64, 00:10:52.041 "superblock": false, 00:10:52.041 "method": "bdev_raid_create", 00:10:52.041 "req_id": 1 00:10:52.041 } 00:10:52.041 Got JSON-RPC error response 00:10:52.041 response: 00:10:52.041 { 00:10:52.041 "code": -17, 00:10:52.041 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:52.041 } 00:10:52.041 04:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:52.041 04:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:52.041 04:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:52.041 04:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:52.041 04:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:52.041 04:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.041 04:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.041 04:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.041 04:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:52.041 04:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.041 04:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:52.041 04:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:52.041 04:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:10:52.041 04:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.041 04:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.041 [2024-11-21 04:08:51.998550] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:52.041 [2024-11-21 04:08:51.998602] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:52.041 [2024-11-21 04:08:51.998624] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:52.041 [2024-11-21 04:08:51.998633] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:52.041 [2024-11-21 04:08:52.001109] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:52.041 [2024-11-21 04:08:52.001143] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:52.041 [2024-11-21 04:08:52.001230] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:52.041 [2024-11-21 04:08:52.001268] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:52.041 pt1 00:10:52.041 04:08:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.041 04:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:10:52.041 04:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:52.041 04:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:52.041 04:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:52.041 04:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:52.041 04:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:10:52.041 04:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.041 04:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.041 04:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.041 04:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.041 04:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.041 04:08:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.041 04:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:52.301 04:08:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.302 04:08:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.302 04:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.302 "name": "raid_bdev1", 00:10:52.302 "uuid": "792d1337-5197-4740-8c56-35014478a5a6", 00:10:52.302 "strip_size_kb": 64, 00:10:52.302 "state": "configuring", 00:10:52.302 "raid_level": "concat", 00:10:52.302 "superblock": true, 00:10:52.302 "num_base_bdevs": 4, 00:10:52.302 "num_base_bdevs_discovered": 1, 00:10:52.302 "num_base_bdevs_operational": 4, 00:10:52.302 "base_bdevs_list": [ 00:10:52.302 { 00:10:52.302 "name": "pt1", 00:10:52.302 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:52.302 "is_configured": true, 00:10:52.302 "data_offset": 2048, 00:10:52.302 "data_size": 63488 00:10:52.302 }, 00:10:52.302 { 00:10:52.302 "name": null, 00:10:52.302 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:52.302 "is_configured": false, 00:10:52.302 "data_offset": 2048, 00:10:52.302 "data_size": 63488 00:10:52.302 }, 00:10:52.302 { 00:10:52.302 "name": null, 00:10:52.302 
"uuid": "00000000-0000-0000-0000-000000000003", 00:10:52.302 "is_configured": false, 00:10:52.302 "data_offset": 2048, 00:10:52.302 "data_size": 63488 00:10:52.302 }, 00:10:52.302 { 00:10:52.302 "name": null, 00:10:52.302 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:52.302 "is_configured": false, 00:10:52.302 "data_offset": 2048, 00:10:52.302 "data_size": 63488 00:10:52.302 } 00:10:52.302 ] 00:10:52.302 }' 00:10:52.302 04:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.302 04:08:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.562 04:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:10:52.562 04:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:52.562 04:08:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.562 04:08:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.562 [2024-11-21 04:08:52.473790] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:52.562 [2024-11-21 04:08:52.473868] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:52.562 [2024-11-21 04:08:52.473896] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:10:52.562 [2024-11-21 04:08:52.473907] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:52.562 [2024-11-21 04:08:52.474447] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:52.562 [2024-11-21 04:08:52.474478] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:52.562 [2024-11-21 04:08:52.474587] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:52.562 [2024-11-21 04:08:52.474619] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:52.562 pt2 00:10:52.562 04:08:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.562 04:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:52.562 04:08:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.562 04:08:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.562 [2024-11-21 04:08:52.485770] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:52.562 04:08:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.562 04:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:10:52.562 04:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:52.562 04:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:52.562 04:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:52.562 04:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:52.562 04:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:52.562 04:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.562 04:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.562 04:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.562 04:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.562 04:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.562 04:08:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:52.562 04:08:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.562 04:08:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.562 04:08:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.823 04:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.823 "name": "raid_bdev1", 00:10:52.823 "uuid": "792d1337-5197-4740-8c56-35014478a5a6", 00:10:52.823 "strip_size_kb": 64, 00:10:52.823 "state": "configuring", 00:10:52.823 "raid_level": "concat", 00:10:52.823 "superblock": true, 00:10:52.823 "num_base_bdevs": 4, 00:10:52.823 "num_base_bdevs_discovered": 1, 00:10:52.823 "num_base_bdevs_operational": 4, 00:10:52.823 "base_bdevs_list": [ 00:10:52.823 { 00:10:52.823 "name": "pt1", 00:10:52.823 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:52.823 "is_configured": true, 00:10:52.823 "data_offset": 2048, 00:10:52.823 "data_size": 63488 00:10:52.823 }, 00:10:52.823 { 00:10:52.823 "name": null, 00:10:52.823 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:52.823 "is_configured": false, 00:10:52.823 "data_offset": 0, 00:10:52.823 "data_size": 63488 00:10:52.823 }, 00:10:52.823 { 00:10:52.823 "name": null, 00:10:52.823 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:52.823 "is_configured": false, 00:10:52.823 "data_offset": 2048, 00:10:52.823 "data_size": 63488 00:10:52.823 }, 00:10:52.823 { 00:10:52.823 "name": null, 00:10:52.823 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:52.823 "is_configured": false, 00:10:52.823 "data_offset": 2048, 00:10:52.823 "data_size": 63488 00:10:52.823 } 00:10:52.823 ] 00:10:52.823 }' 00:10:52.823 04:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.823 04:08:52 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:53.082 04:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:53.082 04:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:53.083 04:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:53.083 04:08:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.083 04:08:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.083 [2024-11-21 04:08:52.881127] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:53.083 [2024-11-21 04:08:52.881254] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:53.083 [2024-11-21 04:08:52.881278] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:10:53.083 [2024-11-21 04:08:52.881290] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:53.083 [2024-11-21 04:08:52.881814] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:53.083 [2024-11-21 04:08:52.881844] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:53.083 [2024-11-21 04:08:52.881941] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:53.083 [2024-11-21 04:08:52.881992] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:53.083 pt2 00:10:53.083 04:08:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.083 04:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:53.083 04:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:53.083 04:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:53.083 04:08:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.083 04:08:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.083 [2024-11-21 04:08:52.893035] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:53.083 [2024-11-21 04:08:52.893102] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:53.083 [2024-11-21 04:08:52.893123] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:53.083 [2024-11-21 04:08:52.893135] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:53.083 [2024-11-21 04:08:52.893625] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:53.083 [2024-11-21 04:08:52.893654] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:53.083 [2024-11-21 04:08:52.893731] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:53.083 [2024-11-21 04:08:52.893755] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:53.083 pt3 00:10:53.083 04:08:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.083 04:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:53.083 04:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:53.083 04:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:53.083 04:08:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.083 04:08:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.083 [2024-11-21 04:08:52.905003] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:53.083 [2024-11-21 04:08:52.905066] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:53.083 [2024-11-21 04:08:52.905083] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:10:53.083 [2024-11-21 04:08:52.905093] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:53.083 [2024-11-21 04:08:52.905466] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:53.083 [2024-11-21 04:08:52.905493] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:53.083 [2024-11-21 04:08:52.905560] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:53.083 [2024-11-21 04:08:52.905591] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:53.083 [2024-11-21 04:08:52.905749] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:10:53.083 [2024-11-21 04:08:52.905769] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:53.083 [2024-11-21 04:08:52.906027] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:10:53.083 [2024-11-21 04:08:52.906158] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:10:53.083 [2024-11-21 04:08:52.906171] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:10:53.083 [2024-11-21 04:08:52.906312] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:53.083 pt4 00:10:53.083 04:08:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.083 04:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:53.083 04:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:10:53.083 04:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:53.083 04:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:53.083 04:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:53.083 04:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:53.083 04:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:53.083 04:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:53.083 04:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.083 04:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.083 04:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.083 04:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.083 04:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:53.083 04:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.083 04:08:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.083 04:08:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.083 04:08:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.083 04:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.083 "name": "raid_bdev1", 00:10:53.083 "uuid": "792d1337-5197-4740-8c56-35014478a5a6", 00:10:53.083 "strip_size_kb": 64, 00:10:53.083 "state": "online", 00:10:53.083 "raid_level": "concat", 00:10:53.083 
"superblock": true, 00:10:53.083 "num_base_bdevs": 4, 00:10:53.083 "num_base_bdevs_discovered": 4, 00:10:53.083 "num_base_bdevs_operational": 4, 00:10:53.083 "base_bdevs_list": [ 00:10:53.083 { 00:10:53.083 "name": "pt1", 00:10:53.083 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:53.083 "is_configured": true, 00:10:53.083 "data_offset": 2048, 00:10:53.083 "data_size": 63488 00:10:53.083 }, 00:10:53.083 { 00:10:53.083 "name": "pt2", 00:10:53.083 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:53.083 "is_configured": true, 00:10:53.083 "data_offset": 2048, 00:10:53.083 "data_size": 63488 00:10:53.083 }, 00:10:53.083 { 00:10:53.083 "name": "pt3", 00:10:53.083 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:53.083 "is_configured": true, 00:10:53.083 "data_offset": 2048, 00:10:53.083 "data_size": 63488 00:10:53.083 }, 00:10:53.083 { 00:10:53.083 "name": "pt4", 00:10:53.083 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:53.083 "is_configured": true, 00:10:53.083 "data_offset": 2048, 00:10:53.083 "data_size": 63488 00:10:53.083 } 00:10:53.083 ] 00:10:53.083 }' 00:10:53.083 04:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.083 04:08:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.655 04:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:53.655 04:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:53.655 04:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:53.655 04:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:53.655 04:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:53.655 04:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:53.655 04:08:53 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:53.655 04:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.655 04:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.655 04:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:53.655 [2024-11-21 04:08:53.368648] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:53.655 04:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.655 04:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:53.655 "name": "raid_bdev1", 00:10:53.655 "aliases": [ 00:10:53.655 "792d1337-5197-4740-8c56-35014478a5a6" 00:10:53.655 ], 00:10:53.655 "product_name": "Raid Volume", 00:10:53.655 "block_size": 512, 00:10:53.655 "num_blocks": 253952, 00:10:53.655 "uuid": "792d1337-5197-4740-8c56-35014478a5a6", 00:10:53.655 "assigned_rate_limits": { 00:10:53.655 "rw_ios_per_sec": 0, 00:10:53.655 "rw_mbytes_per_sec": 0, 00:10:53.655 "r_mbytes_per_sec": 0, 00:10:53.655 "w_mbytes_per_sec": 0 00:10:53.655 }, 00:10:53.655 "claimed": false, 00:10:53.655 "zoned": false, 00:10:53.655 "supported_io_types": { 00:10:53.655 "read": true, 00:10:53.655 "write": true, 00:10:53.655 "unmap": true, 00:10:53.655 "flush": true, 00:10:53.655 "reset": true, 00:10:53.655 "nvme_admin": false, 00:10:53.655 "nvme_io": false, 00:10:53.655 "nvme_io_md": false, 00:10:53.655 "write_zeroes": true, 00:10:53.655 "zcopy": false, 00:10:53.655 "get_zone_info": false, 00:10:53.655 "zone_management": false, 00:10:53.655 "zone_append": false, 00:10:53.655 "compare": false, 00:10:53.655 "compare_and_write": false, 00:10:53.655 "abort": false, 00:10:53.655 "seek_hole": false, 00:10:53.655 "seek_data": false, 00:10:53.655 "copy": false, 00:10:53.655 "nvme_iov_md": false 00:10:53.655 }, 00:10:53.655 
"memory_domains": [ 00:10:53.655 { 00:10:53.655 "dma_device_id": "system", 00:10:53.655 "dma_device_type": 1 00:10:53.655 }, 00:10:53.655 { 00:10:53.655 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.655 "dma_device_type": 2 00:10:53.655 }, 00:10:53.655 { 00:10:53.655 "dma_device_id": "system", 00:10:53.655 "dma_device_type": 1 00:10:53.655 }, 00:10:53.655 { 00:10:53.655 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.655 "dma_device_type": 2 00:10:53.655 }, 00:10:53.655 { 00:10:53.655 "dma_device_id": "system", 00:10:53.655 "dma_device_type": 1 00:10:53.655 }, 00:10:53.655 { 00:10:53.655 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.655 "dma_device_type": 2 00:10:53.655 }, 00:10:53.655 { 00:10:53.655 "dma_device_id": "system", 00:10:53.655 "dma_device_type": 1 00:10:53.655 }, 00:10:53.655 { 00:10:53.655 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.655 "dma_device_type": 2 00:10:53.655 } 00:10:53.655 ], 00:10:53.655 "driver_specific": { 00:10:53.655 "raid": { 00:10:53.655 "uuid": "792d1337-5197-4740-8c56-35014478a5a6", 00:10:53.655 "strip_size_kb": 64, 00:10:53.655 "state": "online", 00:10:53.655 "raid_level": "concat", 00:10:53.655 "superblock": true, 00:10:53.655 "num_base_bdevs": 4, 00:10:53.655 "num_base_bdevs_discovered": 4, 00:10:53.655 "num_base_bdevs_operational": 4, 00:10:53.655 "base_bdevs_list": [ 00:10:53.655 { 00:10:53.655 "name": "pt1", 00:10:53.655 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:53.655 "is_configured": true, 00:10:53.655 "data_offset": 2048, 00:10:53.655 "data_size": 63488 00:10:53.655 }, 00:10:53.655 { 00:10:53.655 "name": "pt2", 00:10:53.655 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:53.655 "is_configured": true, 00:10:53.655 "data_offset": 2048, 00:10:53.655 "data_size": 63488 00:10:53.655 }, 00:10:53.655 { 00:10:53.655 "name": "pt3", 00:10:53.655 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:53.655 "is_configured": true, 00:10:53.655 "data_offset": 2048, 00:10:53.655 "data_size": 63488 
00:10:53.655 }, 00:10:53.655 { 00:10:53.655 "name": "pt4", 00:10:53.655 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:53.655 "is_configured": true, 00:10:53.655 "data_offset": 2048, 00:10:53.655 "data_size": 63488 00:10:53.655 } 00:10:53.655 ] 00:10:53.655 } 00:10:53.655 } 00:10:53.655 }' 00:10:53.655 04:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:53.655 04:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:53.655 pt2 00:10:53.655 pt3 00:10:53.655 pt4' 00:10:53.655 04:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:53.655 04:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:53.655 04:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:53.655 04:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:53.655 04:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.655 04:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.655 04:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:53.655 04:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.655 04:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:53.655 04:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:53.655 04:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:53.655 04:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:10:53.655 04:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.656 04:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:53.656 04:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.656 04:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.656 04:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:53.656 04:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:53.656 04:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:53.656 04:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:53.656 04:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:53.656 04:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.656 04:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.916 04:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.916 04:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:53.916 04:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:53.916 04:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:53.916 04:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:53.916 04:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.916 04:08:53 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:53.916 04:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:53.916 04:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.916 04:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:53.916 04:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:53.916 04:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:53.916 04:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:53.916 04:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.916 04:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.916 [2024-11-21 04:08:53.703982] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:53.916 04:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.916 04:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 792d1337-5197-4740-8c56-35014478a5a6 '!=' 792d1337-5197-4740-8c56-35014478a5a6 ']' 00:10:53.916 04:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:10:53.916 04:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:53.916 04:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:53.916 04:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 83491 00:10:53.916 04:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 83491 ']' 00:10:53.916 04:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 83491 00:10:53.916 04:08:53 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:10:53.916 04:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:53.916 04:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83491 00:10:53.916 killing process with pid 83491 00:10:53.916 04:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:53.916 04:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:53.916 04:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83491' 00:10:53.916 04:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 83491 00:10:53.916 [2024-11-21 04:08:53.777027] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:53.916 [2024-11-21 04:08:53.777164] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:53.916 04:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 83491 00:10:53.916 [2024-11-21 04:08:53.777271] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:53.916 [2024-11-21 04:08:53.777286] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:10:53.916 [2024-11-21 04:08:53.857554] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:54.486 04:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:54.486 00:10:54.486 real 0m4.269s 00:10:54.486 user 0m6.515s 00:10:54.486 sys 0m1.041s 00:10:54.486 04:08:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:54.486 04:08:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.486 ************************************ 00:10:54.486 END TEST raid_superblock_test 
00:10:54.486 ************************************ 00:10:54.486 04:08:54 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:10:54.486 04:08:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:54.486 04:08:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:54.486 04:08:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:54.486 ************************************ 00:10:54.486 START TEST raid_read_error_test 00:10:54.486 ************************************ 00:10:54.486 04:08:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:10:54.486 04:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:54.486 04:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:54.486 04:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:54.486 04:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:54.486 04:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:54.486 04:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:54.486 04:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:54.486 04:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:54.486 04:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:54.486 04:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:54.486 04:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:54.486 04:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:54.486 04:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
(( i++ )) 00:10:54.486 04:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:54.486 04:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:54.486 04:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:54.486 04:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:54.486 04:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:54.486 04:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:54.486 04:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:54.486 04:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:54.486 04:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:54.486 04:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:54.486 04:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:54.486 04:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:54.486 04:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:54.486 04:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:54.486 04:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:54.486 04:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.o5TpWqrSAA 00:10:54.486 04:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=83739 00:10:54.486 04:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 83739 00:10:54.486 04:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:54.486 04:08:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 83739 ']' 00:10:54.486 04:08:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:54.486 04:08:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:54.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:54.486 04:08:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:54.486 04:08:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:54.486 04:08:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.486 [2024-11-21 04:08:54.358338] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:10:54.486 [2024-11-21 04:08:54.358461] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83739 ] 00:10:54.746 [2024-11-21 04:08:54.513058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:54.746 [2024-11-21 04:08:54.552871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.746 [2024-11-21 04:08:54.628460] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:54.746 [2024-11-21 04:08:54.628504] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:55.315 04:08:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:55.315 04:08:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:55.315 04:08:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:55.315 04:08:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:55.315 04:08:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.315 04:08:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.315 BaseBdev1_malloc 00:10:55.315 04:08:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.315 04:08:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:55.315 04:08:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.315 04:08:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.315 true 00:10:55.315 04:08:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:55.315 04:08:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:55.315 04:08:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.315 04:08:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.315 [2024-11-21 04:08:55.246102] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:55.315 [2024-11-21 04:08:55.246468] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:55.315 [2024-11-21 04:08:55.246508] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:10:55.315 [2024-11-21 04:08:55.246518] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:55.315 [2024-11-21 04:08:55.249199] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:55.315 [2024-11-21 04:08:55.249308] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:55.315 BaseBdev1 00:10:55.315 04:08:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.316 04:08:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:55.316 04:08:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:55.316 04:08:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.316 04:08:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.316 BaseBdev2_malloc 00:10:55.316 04:08:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.316 04:08:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:55.316 04:08:55 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.316 04:08:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.316 true 00:10:55.316 04:08:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.316 04:08:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:55.316 04:08:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.316 04:08:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.576 [2024-11-21 04:08:55.292671] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:55.576 [2024-11-21 04:08:55.292882] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:55.576 [2024-11-21 04:08:55.292942] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:10:55.576 [2024-11-21 04:08:55.293001] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:55.576 [2024-11-21 04:08:55.295509] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:55.576 [2024-11-21 04:08:55.295616] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:55.576 BaseBdev2 00:10:55.576 04:08:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.576 04:08:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:55.576 04:08:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:55.576 04:08:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.576 04:08:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.576 BaseBdev3_malloc 00:10:55.576 04:08:55 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.576 04:08:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:55.576 04:08:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.576 04:08:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.576 true 00:10:55.576 04:08:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.576 04:08:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:55.576 04:08:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.576 04:08:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.576 [2024-11-21 04:08:55.339438] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:55.576 [2024-11-21 04:08:55.339636] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:55.576 [2024-11-21 04:08:55.339696] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:10:55.576 [2024-11-21 04:08:55.339763] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:55.576 [2024-11-21 04:08:55.342208] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:55.576 [2024-11-21 04:08:55.342319] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:55.576 BaseBdev3 00:10:55.576 04:08:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.576 04:08:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:55.576 04:08:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:10:55.576 04:08:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.576 04:08:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.576 BaseBdev4_malloc 00:10:55.576 04:08:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.576 04:08:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:55.576 04:08:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.577 04:08:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.577 true 00:10:55.577 04:08:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.577 04:08:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:55.577 04:08:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.577 04:08:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.577 [2024-11-21 04:08:55.396878] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:55.577 [2024-11-21 04:08:55.397308] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:55.577 [2024-11-21 04:08:55.397416] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:55.577 [2024-11-21 04:08:55.397467] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:55.577 [2024-11-21 04:08:55.399874] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:55.577 [2024-11-21 04:08:55.399978] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:55.577 BaseBdev4 00:10:55.577 04:08:55 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.577 04:08:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:55.577 04:08:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.577 04:08:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.577 [2024-11-21 04:08:55.408930] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:55.577 [2024-11-21 04:08:55.411084] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:55.577 [2024-11-21 04:08:55.411167] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:55.577 [2024-11-21 04:08:55.411238] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:55.577 [2024-11-21 04:08:55.411456] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002000 00:10:55.577 [2024-11-21 04:08:55.411475] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:55.577 [2024-11-21 04:08:55.411751] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002ef0 00:10:55.577 [2024-11-21 04:08:55.411900] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002000 00:10:55.577 [2024-11-21 04:08:55.411919] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002000 00:10:55.577 [2024-11-21 04:08:55.412089] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:55.577 04:08:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.577 04:08:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:55.577 04:08:55 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:55.577 04:08:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:55.577 04:08:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:55.577 04:08:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:55.577 04:08:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:55.577 04:08:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.577 04:08:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.577 04:08:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.577 04:08:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.577 04:08:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.577 04:08:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:55.577 04:08:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.577 04:08:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.577 04:08:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.577 04:08:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.577 "name": "raid_bdev1", 00:10:55.577 "uuid": "99af68c4-a2fd-4920-9453-de062cfa8753", 00:10:55.577 "strip_size_kb": 64, 00:10:55.577 "state": "online", 00:10:55.577 "raid_level": "concat", 00:10:55.577 "superblock": true, 00:10:55.577 "num_base_bdevs": 4, 00:10:55.577 "num_base_bdevs_discovered": 4, 00:10:55.577 "num_base_bdevs_operational": 4, 00:10:55.577 "base_bdevs_list": [ 
00:10:55.577 { 00:10:55.577 "name": "BaseBdev1", 00:10:55.577 "uuid": "984705d6-1165-5cf6-b628-5d6e477837ae", 00:10:55.577 "is_configured": true, 00:10:55.577 "data_offset": 2048, 00:10:55.577 "data_size": 63488 00:10:55.577 }, 00:10:55.577 { 00:10:55.577 "name": "BaseBdev2", 00:10:55.577 "uuid": "4e80b0ee-e5bb-5b4a-abc0-721f229eafbf", 00:10:55.577 "is_configured": true, 00:10:55.577 "data_offset": 2048, 00:10:55.577 "data_size": 63488 00:10:55.577 }, 00:10:55.577 { 00:10:55.577 "name": "BaseBdev3", 00:10:55.577 "uuid": "ee56bef6-6d23-55ee-a22b-4bcef38702f2", 00:10:55.577 "is_configured": true, 00:10:55.577 "data_offset": 2048, 00:10:55.577 "data_size": 63488 00:10:55.577 }, 00:10:55.577 { 00:10:55.577 "name": "BaseBdev4", 00:10:55.577 "uuid": "68dd62ff-d8d9-57eb-991d-15a47c53dd5b", 00:10:55.577 "is_configured": true, 00:10:55.577 "data_offset": 2048, 00:10:55.577 "data_size": 63488 00:10:55.577 } 00:10:55.577 ] 00:10:55.577 }' 00:10:55.577 04:08:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.577 04:08:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.146 04:08:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:56.146 04:08:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:56.146 [2024-11-21 04:08:55.944592] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000003090 00:10:57.084 04:08:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:57.084 04:08:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.084 04:08:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.084 04:08:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.084 04:08:56 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:57.084 04:08:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:57.084 04:08:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:57.084 04:08:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:57.084 04:08:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:57.084 04:08:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:57.084 04:08:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:57.084 04:08:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:57.084 04:08:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:57.084 04:08:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.084 04:08:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.084 04:08:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.084 04:08:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.085 04:08:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.085 04:08:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:57.085 04:08:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.085 04:08:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.085 04:08:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.085 04:08:56 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.085 "name": "raid_bdev1", 00:10:57.085 "uuid": "99af68c4-a2fd-4920-9453-de062cfa8753", 00:10:57.085 "strip_size_kb": 64, 00:10:57.085 "state": "online", 00:10:57.085 "raid_level": "concat", 00:10:57.085 "superblock": true, 00:10:57.085 "num_base_bdevs": 4, 00:10:57.085 "num_base_bdevs_discovered": 4, 00:10:57.085 "num_base_bdevs_operational": 4, 00:10:57.085 "base_bdevs_list": [ 00:10:57.085 { 00:10:57.085 "name": "BaseBdev1", 00:10:57.085 "uuid": "984705d6-1165-5cf6-b628-5d6e477837ae", 00:10:57.085 "is_configured": true, 00:10:57.085 "data_offset": 2048, 00:10:57.085 "data_size": 63488 00:10:57.085 }, 00:10:57.085 { 00:10:57.085 "name": "BaseBdev2", 00:10:57.085 "uuid": "4e80b0ee-e5bb-5b4a-abc0-721f229eafbf", 00:10:57.085 "is_configured": true, 00:10:57.085 "data_offset": 2048, 00:10:57.085 "data_size": 63488 00:10:57.085 }, 00:10:57.085 { 00:10:57.085 "name": "BaseBdev3", 00:10:57.085 "uuid": "ee56bef6-6d23-55ee-a22b-4bcef38702f2", 00:10:57.085 "is_configured": true, 00:10:57.085 "data_offset": 2048, 00:10:57.085 "data_size": 63488 00:10:57.085 }, 00:10:57.085 { 00:10:57.085 "name": "BaseBdev4", 00:10:57.085 "uuid": "68dd62ff-d8d9-57eb-991d-15a47c53dd5b", 00:10:57.085 "is_configured": true, 00:10:57.085 "data_offset": 2048, 00:10:57.085 "data_size": 63488 00:10:57.085 } 00:10:57.085 ] 00:10:57.085 }' 00:10:57.085 04:08:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.085 04:08:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.345 04:08:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:57.345 04:08:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.345 04:08:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.345 [2024-11-21 04:08:57.300846] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:57.345 [2024-11-21 04:08:57.300902] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:57.345 [2024-11-21 04:08:57.303571] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:57.345 [2024-11-21 04:08:57.303631] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:57.345 [2024-11-21 04:08:57.303685] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:57.345 [2024-11-21 04:08:57.303696] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state offline 00:10:57.345 { 00:10:57.345 "results": [ 00:10:57.345 { 00:10:57.345 "job": "raid_bdev1", 00:10:57.345 "core_mask": "0x1", 00:10:57.345 "workload": "randrw", 00:10:57.345 "percentage": 50, 00:10:57.345 "status": "finished", 00:10:57.345 "queue_depth": 1, 00:10:57.345 "io_size": 131072, 00:10:57.345 "runtime": 1.356812, 00:10:57.345 "iops": 14326.966447820332, 00:10:57.345 "mibps": 1790.8708059775415, 00:10:57.345 "io_failed": 1, 00:10:57.345 "io_timeout": 0, 00:10:57.345 "avg_latency_us": 98.07663306197998, 00:10:57.345 "min_latency_us": 24.593886462882097, 00:10:57.345 "max_latency_us": 1380.8349344978167 00:10:57.345 } 00:10:57.345 ], 00:10:57.345 "core_count": 1 00:10:57.345 } 00:10:57.345 04:08:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.345 04:08:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 83739 00:10:57.345 04:08:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 83739 ']' 00:10:57.345 04:08:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 83739 00:10:57.345 04:08:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:57.345 04:08:57 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:57.345 04:08:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83739 00:10:57.606 04:08:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:57.606 04:08:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:57.606 04:08:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83739' 00:10:57.606 killing process with pid 83739 00:10:57.606 04:08:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 83739 00:10:57.606 [2024-11-21 04:08:57.334663] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:57.606 04:08:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 83739 00:10:57.606 [2024-11-21 04:08:57.401976] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:57.866 04:08:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.o5TpWqrSAA 00:10:57.866 04:08:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:57.866 04:08:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:57.866 04:08:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:10:57.866 04:08:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:57.866 04:08:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:57.866 04:08:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:57.866 04:08:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:10:57.866 00:10:57.866 real 0m3.480s 00:10:57.866 user 0m4.230s 00:10:57.866 sys 0m0.643s 00:10:57.866 04:08:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:10:57.866 04:08:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.866 ************************************ 00:10:57.866 END TEST raid_read_error_test 00:10:57.866 ************************************ 00:10:57.866 04:08:57 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:10:57.866 04:08:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:57.866 04:08:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:57.866 04:08:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:57.866 ************************************ 00:10:57.866 START TEST raid_write_error_test 00:10:57.866 ************************************ 00:10:57.866 04:08:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 00:10:57.866 04:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:57.866 04:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:57.866 04:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:57.866 04:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:57.866 04:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:57.866 04:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:57.866 04:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:57.866 04:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:57.866 04:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:57.866 04:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:57.866 04:08:57 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:57.866 04:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:57.866 04:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:57.866 04:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:57.866 04:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:57.866 04:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:57.866 04:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:57.866 04:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:57.866 04:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:57.866 04:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:57.866 04:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:57.866 04:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:57.866 04:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:57.866 04:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:57.866 04:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:57.866 04:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:57.866 04:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:57.866 04:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:57.866 04:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ulbZSMVIe4 00:10:57.866 04:08:57 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=83879 00:10:57.866 04:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 83879 00:10:57.866 04:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:57.866 04:08:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 83879 ']' 00:10:57.866 04:08:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:57.866 04:08:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:57.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:57.866 04:08:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:57.866 04:08:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:57.866 04:08:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.127 [2024-11-21 04:08:57.921135] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:10:58.127 [2024-11-21 04:08:57.921315] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83879 ] 00:10:58.127 [2024-11-21 04:08:58.080588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:58.387 [2024-11-21 04:08:58.121651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.387 [2024-11-21 04:08:58.197577] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:58.387 [2024-11-21 04:08:58.197625] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:58.957 04:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:58.957 04:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:58.957 04:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:58.957 04:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:58.957 04:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.957 04:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.957 BaseBdev1_malloc 00:10:58.957 04:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.957 04:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:58.957 04:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.957 04:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.957 true 00:10:58.957 04:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:58.957 04:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:58.957 04:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.957 04:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.957 [2024-11-21 04:08:58.779765] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:58.957 [2024-11-21 04:08:58.779822] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:58.957 [2024-11-21 04:08:58.779847] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:10:58.957 [2024-11-21 04:08:58.779857] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:58.957 [2024-11-21 04:08:58.782384] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:58.957 [2024-11-21 04:08:58.782416] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:58.957 BaseBdev1 00:10:58.957 04:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.957 04:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:58.957 04:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:58.957 04:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.957 04:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.957 BaseBdev2_malloc 00:10:58.957 04:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.957 04:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:58.957 04:08:58 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.957 04:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.957 true 00:10:58.957 04:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.957 04:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:58.957 04:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.957 04:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.957 [2024-11-21 04:08:58.826403] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:58.957 [2024-11-21 04:08:58.826455] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:58.957 [2024-11-21 04:08:58.826476] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:10:58.957 [2024-11-21 04:08:58.826493] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:58.957 [2024-11-21 04:08:58.828857] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:58.957 [2024-11-21 04:08:58.828893] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:58.957 BaseBdev2 00:10:58.957 04:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.957 04:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:58.957 04:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:58.957 04:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.957 04:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:10:58.957 BaseBdev3_malloc 00:10:58.957 04:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.957 04:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:58.957 04:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.957 04:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.957 true 00:10:58.957 04:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.957 04:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:58.957 04:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.957 04:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.957 [2024-11-21 04:08:58.872946] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:58.957 [2024-11-21 04:08:58.872994] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:58.957 [2024-11-21 04:08:58.873014] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:10:58.957 [2024-11-21 04:08:58.873023] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:58.957 [2024-11-21 04:08:58.875382] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:58.957 [2024-11-21 04:08:58.875412] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:58.957 BaseBdev3 00:10:58.957 04:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.957 04:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:58.957 04:08:58 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:58.957 04:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.957 04:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.957 BaseBdev4_malloc 00:10:58.957 04:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.957 04:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:58.957 04:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.957 04:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.957 true 00:10:58.957 04:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.957 04:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:58.957 04:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.957 04:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.219 [2024-11-21 04:08:58.928483] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:59.219 [2024-11-21 04:08:58.928540] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:59.219 [2024-11-21 04:08:58.928566] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:59.219 [2024-11-21 04:08:58.928575] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:59.219 [2024-11-21 04:08:58.930924] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:59.219 [2024-11-21 04:08:58.930956] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:59.219 BaseBdev4 
00:10:59.219 04:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.219 04:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:59.219 04:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.219 04:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.219 [2024-11-21 04:08:58.940519] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:59.219 [2024-11-21 04:08:58.942587] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:59.219 [2024-11-21 04:08:58.942665] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:59.219 [2024-11-21 04:08:58.942721] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:59.219 [2024-11-21 04:08:58.942918] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002000 00:10:59.219 [2024-11-21 04:08:58.942936] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:59.219 [2024-11-21 04:08:58.943208] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002ef0 00:10:59.219 [2024-11-21 04:08:58.943395] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002000 00:10:59.219 [2024-11-21 04:08:58.943416] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002000 00:10:59.219 [2024-11-21 04:08:58.943546] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:59.219 04:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.219 04:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:10:59.219 04:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:59.219 04:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:59.219 04:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:59.219 04:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:59.219 04:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:59.219 04:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.219 04:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.219 04:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.219 04:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.219 04:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.219 04:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:59.219 04:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.219 04:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.219 04:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.219 04:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.219 "name": "raid_bdev1", 00:10:59.219 "uuid": "95850c67-d13c-49c0-b8f5-7bc5228f4d72", 00:10:59.219 "strip_size_kb": 64, 00:10:59.219 "state": "online", 00:10:59.219 "raid_level": "concat", 00:10:59.219 "superblock": true, 00:10:59.219 "num_base_bdevs": 4, 00:10:59.219 "num_base_bdevs_discovered": 4, 00:10:59.219 
"num_base_bdevs_operational": 4, 00:10:59.219 "base_bdevs_list": [ 00:10:59.219 { 00:10:59.219 "name": "BaseBdev1", 00:10:59.219 "uuid": "e0684704-9014-5f1e-a513-38df5ec48a33", 00:10:59.219 "is_configured": true, 00:10:59.219 "data_offset": 2048, 00:10:59.219 "data_size": 63488 00:10:59.219 }, 00:10:59.219 { 00:10:59.219 "name": "BaseBdev2", 00:10:59.219 "uuid": "68b3ce89-9962-5bce-98df-cd86d8932b75", 00:10:59.219 "is_configured": true, 00:10:59.219 "data_offset": 2048, 00:10:59.219 "data_size": 63488 00:10:59.219 }, 00:10:59.219 { 00:10:59.219 "name": "BaseBdev3", 00:10:59.219 "uuid": "6089c9a7-09ce-50c4-99ff-e1518bdc6de5", 00:10:59.219 "is_configured": true, 00:10:59.219 "data_offset": 2048, 00:10:59.219 "data_size": 63488 00:10:59.219 }, 00:10:59.219 { 00:10:59.219 "name": "BaseBdev4", 00:10:59.219 "uuid": "8b2c1df6-044d-51fe-a1d4-9fb70d32d224", 00:10:59.219 "is_configured": true, 00:10:59.219 "data_offset": 2048, 00:10:59.219 "data_size": 63488 00:10:59.219 } 00:10:59.219 ] 00:10:59.219 }' 00:10:59.219 04:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.219 04:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.479 04:08:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:59.479 04:08:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:59.738 [2024-11-21 04:08:59.548007] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000003090 00:11:00.677 04:09:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:00.677 04:09:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.677 04:09:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.677 04:09:00 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.677 04:09:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:00.677 04:09:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:00.677 04:09:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:00.677 04:09:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:00.677 04:09:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:00.677 04:09:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:00.677 04:09:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:00.677 04:09:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:00.677 04:09:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:00.677 04:09:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.677 04:09:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.677 04:09:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.677 04:09:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.677 04:09:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.677 04:09:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:00.677 04:09:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.677 04:09:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.677 04:09:00 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.677 04:09:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.677 "name": "raid_bdev1", 00:11:00.677 "uuid": "95850c67-d13c-49c0-b8f5-7bc5228f4d72", 00:11:00.677 "strip_size_kb": 64, 00:11:00.677 "state": "online", 00:11:00.677 "raid_level": "concat", 00:11:00.677 "superblock": true, 00:11:00.677 "num_base_bdevs": 4, 00:11:00.677 "num_base_bdevs_discovered": 4, 00:11:00.677 "num_base_bdevs_operational": 4, 00:11:00.677 "base_bdevs_list": [ 00:11:00.677 { 00:11:00.677 "name": "BaseBdev1", 00:11:00.677 "uuid": "e0684704-9014-5f1e-a513-38df5ec48a33", 00:11:00.677 "is_configured": true, 00:11:00.677 "data_offset": 2048, 00:11:00.677 "data_size": 63488 00:11:00.677 }, 00:11:00.677 { 00:11:00.677 "name": "BaseBdev2", 00:11:00.677 "uuid": "68b3ce89-9962-5bce-98df-cd86d8932b75", 00:11:00.677 "is_configured": true, 00:11:00.677 "data_offset": 2048, 00:11:00.677 "data_size": 63488 00:11:00.677 }, 00:11:00.677 { 00:11:00.677 "name": "BaseBdev3", 00:11:00.677 "uuid": "6089c9a7-09ce-50c4-99ff-e1518bdc6de5", 00:11:00.677 "is_configured": true, 00:11:00.677 "data_offset": 2048, 00:11:00.677 "data_size": 63488 00:11:00.677 }, 00:11:00.677 { 00:11:00.677 "name": "BaseBdev4", 00:11:00.677 "uuid": "8b2c1df6-044d-51fe-a1d4-9fb70d32d224", 00:11:00.677 "is_configured": true, 00:11:00.677 "data_offset": 2048, 00:11:00.677 "data_size": 63488 00:11:00.677 } 00:11:00.677 ] 00:11:00.677 }' 00:11:00.678 04:09:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.678 04:09:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.248 04:09:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:01.248 04:09:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.248 04:09:00 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:01.248 [2024-11-21 04:09:00.945051] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:01.248 [2024-11-21 04:09:00.945093] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:01.248 [2024-11-21 04:09:00.947620] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:01.248 [2024-11-21 04:09:00.947679] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:01.248 [2024-11-21 04:09:00.947733] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:01.248 [2024-11-21 04:09:00.947744] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state offline 00:11:01.248 { 00:11:01.248 "results": [ 00:11:01.248 { 00:11:01.248 "job": "raid_bdev1", 00:11:01.248 "core_mask": "0x1", 00:11:01.248 "workload": "randrw", 00:11:01.248 "percentage": 50, 00:11:01.248 "status": "finished", 00:11:01.248 "queue_depth": 1, 00:11:01.248 "io_size": 131072, 00:11:01.248 "runtime": 1.397535, 00:11:01.248 "iops": 14254.383611143907, 00:11:01.248 "mibps": 1781.7979513929884, 00:11:01.248 "io_failed": 1, 00:11:01.248 "io_timeout": 0, 00:11:01.248 "avg_latency_us": 98.5453946373389, 00:11:01.248 "min_latency_us": 24.929257641921396, 00:11:01.248 "max_latency_us": 1473.844541484716 00:11:01.248 } 00:11:01.248 ], 00:11:01.248 "core_count": 1 00:11:01.248 } 00:11:01.248 04:09:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.248 04:09:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 83879 00:11:01.248 04:09:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 83879 ']' 00:11:01.248 04:09:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 83879 00:11:01.248 04:09:00 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@959 -- # uname 00:11:01.248 04:09:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:01.248 04:09:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83879 00:11:01.248 04:09:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:01.248 04:09:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:01.248 killing process with pid 83879 00:11:01.248 04:09:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83879' 00:11:01.248 04:09:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 83879 00:11:01.248 [2024-11-21 04:09:00.991168] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:01.248 04:09:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 83879 00:11:01.248 [2024-11-21 04:09:01.056759] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:01.508 04:09:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ulbZSMVIe4 00:11:01.508 04:09:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:01.508 04:09:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:01.508 04:09:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:11:01.508 04:09:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:01.508 04:09:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:01.508 04:09:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:01.508 04:09:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:11:01.508 00:11:01.508 real 0m3.579s 00:11:01.508 user 0m4.455s 
00:11:01.508 sys 0m0.649s 00:11:01.508 04:09:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:01.508 04:09:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.508 ************************************ 00:11:01.508 END TEST raid_write_error_test 00:11:01.508 ************************************ 00:11:01.508 04:09:01 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:01.508 04:09:01 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:11:01.508 04:09:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:01.508 04:09:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:01.509 04:09:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:01.509 ************************************ 00:11:01.509 START TEST raid_state_function_test 00:11:01.509 ************************************ 00:11:01.509 04:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false 00:11:01.509 04:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:01.509 04:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:01.509 04:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:01.509 04:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:01.509 04:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:01.509 04:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:01.509 04:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:01.509 04:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:01.509 
04:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:01.509 04:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:01.509 04:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:01.509 04:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:01.509 04:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:01.509 04:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:01.509 04:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:01.509 04:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:01.509 04:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:01.509 04:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:01.509 04:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:01.509 04:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:01.509 04:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:01.509 04:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:01.509 04:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:01.509 04:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:01.509 04:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:01.509 04:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:01.509 04:09:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:01.509 04:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:01.509 04:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=84006 00:11:01.509 04:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:01.509 Process raid pid: 84006 00:11:01.509 04:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 84006' 00:11:01.509 04:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 84006 00:11:01.509 04:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 84006 ']' 00:11:01.509 04:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:01.509 04:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:01.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:01.509 04:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:01.509 04:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:01.509 04:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.769 [2024-11-21 04:09:01.563434] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:11:01.769 [2024-11-21 04:09:01.563571] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:01.769 [2024-11-21 04:09:01.722153] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:02.030 [2024-11-21 04:09:01.762540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.030 [2024-11-21 04:09:01.838723] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:02.030 [2024-11-21 04:09:01.838766] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:02.601 04:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:02.601 04:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:02.601 04:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:02.601 04:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.601 04:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.601 [2024-11-21 04:09:02.409755] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:02.601 [2024-11-21 04:09:02.409810] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:02.601 [2024-11-21 04:09:02.409821] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:02.601 [2024-11-21 04:09:02.409830] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:02.601 [2024-11-21 04:09:02.409836] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:02.601 [2024-11-21 04:09:02.409849] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:02.601 [2024-11-21 04:09:02.409855] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:02.601 [2024-11-21 04:09:02.409864] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:02.601 04:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.601 04:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:02.601 04:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:02.601 04:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:02.601 04:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:02.601 04:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:02.601 04:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:02.601 04:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:02.601 04:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:02.601 04:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:02.601 04:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:02.601 04:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:02.601 04:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.601 04:09:02 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.601 04:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.601 04:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.601 04:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:02.601 "name": "Existed_Raid", 00:11:02.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.601 "strip_size_kb": 0, 00:11:02.601 "state": "configuring", 00:11:02.601 "raid_level": "raid1", 00:11:02.601 "superblock": false, 00:11:02.601 "num_base_bdevs": 4, 00:11:02.601 "num_base_bdevs_discovered": 0, 00:11:02.601 "num_base_bdevs_operational": 4, 00:11:02.601 "base_bdevs_list": [ 00:11:02.601 { 00:11:02.601 "name": "BaseBdev1", 00:11:02.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.601 "is_configured": false, 00:11:02.601 "data_offset": 0, 00:11:02.601 "data_size": 0 00:11:02.601 }, 00:11:02.601 { 00:11:02.601 "name": "BaseBdev2", 00:11:02.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.601 "is_configured": false, 00:11:02.601 "data_offset": 0, 00:11:02.601 "data_size": 0 00:11:02.601 }, 00:11:02.601 { 00:11:02.601 "name": "BaseBdev3", 00:11:02.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.601 "is_configured": false, 00:11:02.601 "data_offset": 0, 00:11:02.601 "data_size": 0 00:11:02.601 }, 00:11:02.601 { 00:11:02.601 "name": "BaseBdev4", 00:11:02.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.601 "is_configured": false, 00:11:02.601 "data_offset": 0, 00:11:02.601 "data_size": 0 00:11:02.601 } 00:11:02.601 ] 00:11:02.601 }' 00:11:02.601 04:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:02.601 04:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.862 04:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:11:02.862 04:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.862 04:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.862 [2024-11-21 04:09:02.833010] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:02.862 [2024-11-21 04:09:02.833069] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:11:03.122 04:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.122 04:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:03.122 04:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.122 04:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.122 [2024-11-21 04:09:02.844990] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:03.122 [2024-11-21 04:09:02.845034] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:03.122 [2024-11-21 04:09:02.845043] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:03.122 [2024-11-21 04:09:02.845054] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:03.122 [2024-11-21 04:09:02.845060] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:03.122 [2024-11-21 04:09:02.845070] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:03.122 [2024-11-21 04:09:02.845076] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:03.122 [2024-11-21 04:09:02.845085] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:03.122 04:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.122 04:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:03.122 04:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.122 04:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.122 [2024-11-21 04:09:02.872141] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:03.122 BaseBdev1 00:11:03.122 04:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.122 04:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:03.122 04:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:03.122 04:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:03.122 04:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:03.122 04:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:03.122 04:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:03.122 04:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:03.122 04:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.122 04:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.122 04:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.122 04:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:03.122 04:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.122 04:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.122 [ 00:11:03.122 { 00:11:03.122 "name": "BaseBdev1", 00:11:03.122 "aliases": [ 00:11:03.122 "dd76b402-e23d-47c0-bb16-c40b9016fe45" 00:11:03.122 ], 00:11:03.122 "product_name": "Malloc disk", 00:11:03.122 "block_size": 512, 00:11:03.122 "num_blocks": 65536, 00:11:03.122 "uuid": "dd76b402-e23d-47c0-bb16-c40b9016fe45", 00:11:03.122 "assigned_rate_limits": { 00:11:03.122 "rw_ios_per_sec": 0, 00:11:03.122 "rw_mbytes_per_sec": 0, 00:11:03.122 "r_mbytes_per_sec": 0, 00:11:03.122 "w_mbytes_per_sec": 0 00:11:03.122 }, 00:11:03.122 "claimed": true, 00:11:03.122 "claim_type": "exclusive_write", 00:11:03.122 "zoned": false, 00:11:03.122 "supported_io_types": { 00:11:03.122 "read": true, 00:11:03.122 "write": true, 00:11:03.122 "unmap": true, 00:11:03.122 "flush": true, 00:11:03.122 "reset": true, 00:11:03.122 "nvme_admin": false, 00:11:03.122 "nvme_io": false, 00:11:03.122 "nvme_io_md": false, 00:11:03.122 "write_zeroes": true, 00:11:03.122 "zcopy": true, 00:11:03.122 "get_zone_info": false, 00:11:03.122 "zone_management": false, 00:11:03.122 "zone_append": false, 00:11:03.122 "compare": false, 00:11:03.122 "compare_and_write": false, 00:11:03.122 "abort": true, 00:11:03.122 "seek_hole": false, 00:11:03.122 "seek_data": false, 00:11:03.122 "copy": true, 00:11:03.122 "nvme_iov_md": false 00:11:03.122 }, 00:11:03.122 "memory_domains": [ 00:11:03.122 { 00:11:03.122 "dma_device_id": "system", 00:11:03.122 "dma_device_type": 1 00:11:03.122 }, 00:11:03.122 { 00:11:03.122 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.122 "dma_device_type": 2 00:11:03.122 } 00:11:03.122 ], 00:11:03.122 "driver_specific": {} 00:11:03.122 } 00:11:03.122 ] 00:11:03.122 04:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:11:03.122 04:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:03.122 04:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:03.122 04:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:03.123 04:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:03.123 04:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:03.123 04:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:03.123 04:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:03.123 04:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.123 04:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.123 04:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.123 04:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.123 04:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:03.123 04:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.123 04:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.123 04:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.123 04:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.123 04:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.123 "name": "Existed_Raid", 
00:11:03.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.123 "strip_size_kb": 0, 00:11:03.123 "state": "configuring", 00:11:03.123 "raid_level": "raid1", 00:11:03.123 "superblock": false, 00:11:03.123 "num_base_bdevs": 4, 00:11:03.123 "num_base_bdevs_discovered": 1, 00:11:03.123 "num_base_bdevs_operational": 4, 00:11:03.123 "base_bdevs_list": [ 00:11:03.123 { 00:11:03.123 "name": "BaseBdev1", 00:11:03.123 "uuid": "dd76b402-e23d-47c0-bb16-c40b9016fe45", 00:11:03.123 "is_configured": true, 00:11:03.123 "data_offset": 0, 00:11:03.123 "data_size": 65536 00:11:03.123 }, 00:11:03.123 { 00:11:03.123 "name": "BaseBdev2", 00:11:03.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.123 "is_configured": false, 00:11:03.123 "data_offset": 0, 00:11:03.123 "data_size": 0 00:11:03.123 }, 00:11:03.123 { 00:11:03.123 "name": "BaseBdev3", 00:11:03.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.123 "is_configured": false, 00:11:03.123 "data_offset": 0, 00:11:03.123 "data_size": 0 00:11:03.123 }, 00:11:03.123 { 00:11:03.123 "name": "BaseBdev4", 00:11:03.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.123 "is_configured": false, 00:11:03.123 "data_offset": 0, 00:11:03.123 "data_size": 0 00:11:03.123 } 00:11:03.123 ] 00:11:03.123 }' 00:11:03.123 04:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.123 04:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.383 04:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:03.383 04:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.383 04:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.383 [2024-11-21 04:09:03.335402] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:03.383 [2024-11-21 04:09:03.335460] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:11:03.383 04:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.383 04:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:03.383 04:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.383 04:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.383 [2024-11-21 04:09:03.347411] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:03.383 [2024-11-21 04:09:03.349627] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:03.383 [2024-11-21 04:09:03.349664] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:03.383 [2024-11-21 04:09:03.349674] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:03.383 [2024-11-21 04:09:03.349682] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:03.383 [2024-11-21 04:09:03.349688] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:03.383 [2024-11-21 04:09:03.349696] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:03.383 04:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.383 04:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:03.383 04:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:03.383 04:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:03.383 
04:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:03.383 04:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:03.383 04:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:03.383 04:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:03.383 04:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:03.642 04:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.642 04:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.642 04:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.642 04:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.642 04:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.642 04:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.642 04:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.642 04:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:03.642 04:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.642 04:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.643 "name": "Existed_Raid", 00:11:03.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.643 "strip_size_kb": 0, 00:11:03.643 "state": "configuring", 00:11:03.643 "raid_level": "raid1", 00:11:03.643 "superblock": false, 00:11:03.643 "num_base_bdevs": 4, 00:11:03.643 "num_base_bdevs_discovered": 1, 
00:11:03.643 "num_base_bdevs_operational": 4, 00:11:03.643 "base_bdevs_list": [ 00:11:03.643 { 00:11:03.643 "name": "BaseBdev1", 00:11:03.643 "uuid": "dd76b402-e23d-47c0-bb16-c40b9016fe45", 00:11:03.643 "is_configured": true, 00:11:03.643 "data_offset": 0, 00:11:03.643 "data_size": 65536 00:11:03.643 }, 00:11:03.643 { 00:11:03.643 "name": "BaseBdev2", 00:11:03.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.643 "is_configured": false, 00:11:03.643 "data_offset": 0, 00:11:03.643 "data_size": 0 00:11:03.643 }, 00:11:03.643 { 00:11:03.643 "name": "BaseBdev3", 00:11:03.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.643 "is_configured": false, 00:11:03.643 "data_offset": 0, 00:11:03.643 "data_size": 0 00:11:03.643 }, 00:11:03.643 { 00:11:03.643 "name": "BaseBdev4", 00:11:03.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.643 "is_configured": false, 00:11:03.643 "data_offset": 0, 00:11:03.643 "data_size": 0 00:11:03.643 } 00:11:03.643 ] 00:11:03.643 }' 00:11:03.643 04:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.643 04:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.903 04:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:03.903 04:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.903 04:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.903 [2024-11-21 04:09:03.815375] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:03.903 BaseBdev2 00:11:03.904 04:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.904 04:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:03.904 04:09:03 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:03.904 04:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:03.904 04:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:03.904 04:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:03.904 04:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:03.904 04:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:03.904 04:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.904 04:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.904 04:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.904 04:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:03.904 04:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.904 04:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.904 [ 00:11:03.904 { 00:11:03.904 "name": "BaseBdev2", 00:11:03.904 "aliases": [ 00:11:03.904 "15d16e12-7201-4460-bdb2-f47402e830c9" 00:11:03.904 ], 00:11:03.904 "product_name": "Malloc disk", 00:11:03.904 "block_size": 512, 00:11:03.904 "num_blocks": 65536, 00:11:03.904 "uuid": "15d16e12-7201-4460-bdb2-f47402e830c9", 00:11:03.904 "assigned_rate_limits": { 00:11:03.904 "rw_ios_per_sec": 0, 00:11:03.904 "rw_mbytes_per_sec": 0, 00:11:03.904 "r_mbytes_per_sec": 0, 00:11:03.904 "w_mbytes_per_sec": 0 00:11:03.904 }, 00:11:03.904 "claimed": true, 00:11:03.904 "claim_type": "exclusive_write", 00:11:03.904 "zoned": false, 00:11:03.904 "supported_io_types": { 00:11:03.904 "read": true, 
00:11:03.904 "write": true, 00:11:03.904 "unmap": true, 00:11:03.904 "flush": true, 00:11:03.904 "reset": true, 00:11:03.904 "nvme_admin": false, 00:11:03.904 "nvme_io": false, 00:11:03.904 "nvme_io_md": false, 00:11:03.904 "write_zeroes": true, 00:11:03.904 "zcopy": true, 00:11:03.904 "get_zone_info": false, 00:11:03.904 "zone_management": false, 00:11:03.904 "zone_append": false, 00:11:03.904 "compare": false, 00:11:03.904 "compare_and_write": false, 00:11:03.904 "abort": true, 00:11:03.904 "seek_hole": false, 00:11:03.904 "seek_data": false, 00:11:03.904 "copy": true, 00:11:03.904 "nvme_iov_md": false 00:11:03.904 }, 00:11:03.904 "memory_domains": [ 00:11:03.904 { 00:11:03.904 "dma_device_id": "system", 00:11:03.904 "dma_device_type": 1 00:11:03.904 }, 00:11:03.904 { 00:11:03.904 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.904 "dma_device_type": 2 00:11:03.904 } 00:11:03.904 ], 00:11:03.904 "driver_specific": {} 00:11:03.904 } 00:11:03.904 ] 00:11:03.904 04:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.904 04:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:03.904 04:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:03.904 04:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:03.904 04:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:03.904 04:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:03.904 04:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:03.904 04:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:03.904 04:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:03.904 04:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:03.904 04:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.904 04:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.904 04:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.904 04:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.904 04:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.904 04:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.904 04:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:03.904 04:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.904 04:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.164 04:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.164 "name": "Existed_Raid", 00:11:04.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.164 "strip_size_kb": 0, 00:11:04.164 "state": "configuring", 00:11:04.164 "raid_level": "raid1", 00:11:04.164 "superblock": false, 00:11:04.164 "num_base_bdevs": 4, 00:11:04.164 "num_base_bdevs_discovered": 2, 00:11:04.164 "num_base_bdevs_operational": 4, 00:11:04.164 "base_bdevs_list": [ 00:11:04.164 { 00:11:04.164 "name": "BaseBdev1", 00:11:04.164 "uuid": "dd76b402-e23d-47c0-bb16-c40b9016fe45", 00:11:04.164 "is_configured": true, 00:11:04.164 "data_offset": 0, 00:11:04.164 "data_size": 65536 00:11:04.164 }, 00:11:04.164 { 00:11:04.164 "name": "BaseBdev2", 00:11:04.164 "uuid": "15d16e12-7201-4460-bdb2-f47402e830c9", 00:11:04.164 "is_configured": true, 
00:11:04.164 "data_offset": 0, 00:11:04.164 "data_size": 65536 00:11:04.164 }, 00:11:04.164 { 00:11:04.164 "name": "BaseBdev3", 00:11:04.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.164 "is_configured": false, 00:11:04.164 "data_offset": 0, 00:11:04.164 "data_size": 0 00:11:04.164 }, 00:11:04.164 { 00:11:04.164 "name": "BaseBdev4", 00:11:04.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.164 "is_configured": false, 00:11:04.164 "data_offset": 0, 00:11:04.164 "data_size": 0 00:11:04.164 } 00:11:04.164 ] 00:11:04.164 }' 00:11:04.164 04:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.164 04:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.425 04:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:04.425 04:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.425 04:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.425 [2024-11-21 04:09:04.338588] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:04.425 BaseBdev3 00:11:04.425 04:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.425 04:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:04.425 04:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:04.425 04:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:04.425 04:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:04.425 04:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:04.425 04:09:04 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:04.425 04:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:04.425 04:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.425 04:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.425 04:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.425 04:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:04.425 04:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.425 04:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.425 [ 00:11:04.425 { 00:11:04.425 "name": "BaseBdev3", 00:11:04.425 "aliases": [ 00:11:04.425 "006c47f6-0273-4922-a02a-7a3d7dc54162" 00:11:04.425 ], 00:11:04.425 "product_name": "Malloc disk", 00:11:04.425 "block_size": 512, 00:11:04.425 "num_blocks": 65536, 00:11:04.425 "uuid": "006c47f6-0273-4922-a02a-7a3d7dc54162", 00:11:04.425 "assigned_rate_limits": { 00:11:04.425 "rw_ios_per_sec": 0, 00:11:04.425 "rw_mbytes_per_sec": 0, 00:11:04.425 "r_mbytes_per_sec": 0, 00:11:04.425 "w_mbytes_per_sec": 0 00:11:04.425 }, 00:11:04.425 "claimed": true, 00:11:04.425 "claim_type": "exclusive_write", 00:11:04.425 "zoned": false, 00:11:04.425 "supported_io_types": { 00:11:04.425 "read": true, 00:11:04.425 "write": true, 00:11:04.425 "unmap": true, 00:11:04.425 "flush": true, 00:11:04.425 "reset": true, 00:11:04.425 "nvme_admin": false, 00:11:04.425 "nvme_io": false, 00:11:04.425 "nvme_io_md": false, 00:11:04.425 "write_zeroes": true, 00:11:04.425 "zcopy": true, 00:11:04.425 "get_zone_info": false, 00:11:04.425 "zone_management": false, 00:11:04.425 "zone_append": false, 00:11:04.425 "compare": false, 00:11:04.425 "compare_and_write": false, 
00:11:04.425 "abort": true, 00:11:04.425 "seek_hole": false, 00:11:04.425 "seek_data": false, 00:11:04.425 "copy": true, 00:11:04.425 "nvme_iov_md": false 00:11:04.425 }, 00:11:04.425 "memory_domains": [ 00:11:04.425 { 00:11:04.425 "dma_device_id": "system", 00:11:04.425 "dma_device_type": 1 00:11:04.425 }, 00:11:04.425 { 00:11:04.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.425 "dma_device_type": 2 00:11:04.425 } 00:11:04.425 ], 00:11:04.425 "driver_specific": {} 00:11:04.425 } 00:11:04.425 ] 00:11:04.425 04:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.425 04:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:04.425 04:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:04.425 04:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:04.425 04:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:04.425 04:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:04.425 04:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:04.425 04:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:04.425 04:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:04.425 04:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:04.425 04:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.425 04:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.425 04:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:11:04.425 04:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.425 04:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.425 04:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:04.425 04:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.425 04:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.685 04:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.685 04:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.685 "name": "Existed_Raid", 00:11:04.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.685 "strip_size_kb": 0, 00:11:04.685 "state": "configuring", 00:11:04.685 "raid_level": "raid1", 00:11:04.685 "superblock": false, 00:11:04.685 "num_base_bdevs": 4, 00:11:04.685 "num_base_bdevs_discovered": 3, 00:11:04.685 "num_base_bdevs_operational": 4, 00:11:04.685 "base_bdevs_list": [ 00:11:04.685 { 00:11:04.685 "name": "BaseBdev1", 00:11:04.685 "uuid": "dd76b402-e23d-47c0-bb16-c40b9016fe45", 00:11:04.685 "is_configured": true, 00:11:04.685 "data_offset": 0, 00:11:04.685 "data_size": 65536 00:11:04.685 }, 00:11:04.685 { 00:11:04.685 "name": "BaseBdev2", 00:11:04.685 "uuid": "15d16e12-7201-4460-bdb2-f47402e830c9", 00:11:04.685 "is_configured": true, 00:11:04.685 "data_offset": 0, 00:11:04.685 "data_size": 65536 00:11:04.685 }, 00:11:04.685 { 00:11:04.685 "name": "BaseBdev3", 00:11:04.685 "uuid": "006c47f6-0273-4922-a02a-7a3d7dc54162", 00:11:04.685 "is_configured": true, 00:11:04.685 "data_offset": 0, 00:11:04.685 "data_size": 65536 00:11:04.685 }, 00:11:04.685 { 00:11:04.685 "name": "BaseBdev4", 00:11:04.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.685 "is_configured": false, 
00:11:04.685 "data_offset": 0, 00:11:04.685 "data_size": 0 00:11:04.685 } 00:11:04.685 ] 00:11:04.685 }' 00:11:04.685 04:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.685 04:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.945 04:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:04.945 04:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.945 04:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.945 [2024-11-21 04:09:04.818576] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:04.945 [2024-11-21 04:09:04.818641] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:11:04.945 [2024-11-21 04:09:04.818652] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:04.945 [2024-11-21 04:09:04.818997] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:11:04.945 [2024-11-21 04:09:04.819170] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:11:04.945 [2024-11-21 04:09:04.819184] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:11:04.945 [2024-11-21 04:09:04.819461] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:04.945 BaseBdev4 00:11:04.945 04:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.945 04:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:04.945 04:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:04.945 04:09:04 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:04.945 04:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:04.945 04:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:04.945 04:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:04.945 04:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:04.945 04:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.945 04:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.945 04:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.945 04:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:04.946 04:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.946 04:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.946 [ 00:11:04.946 { 00:11:04.946 "name": "BaseBdev4", 00:11:04.946 "aliases": [ 00:11:04.946 "d31202da-cb23-4f0b-9051-61c51c2af640" 00:11:04.946 ], 00:11:04.946 "product_name": "Malloc disk", 00:11:04.946 "block_size": 512, 00:11:04.946 "num_blocks": 65536, 00:11:04.946 "uuid": "d31202da-cb23-4f0b-9051-61c51c2af640", 00:11:04.946 "assigned_rate_limits": { 00:11:04.946 "rw_ios_per_sec": 0, 00:11:04.946 "rw_mbytes_per_sec": 0, 00:11:04.946 "r_mbytes_per_sec": 0, 00:11:04.946 "w_mbytes_per_sec": 0 00:11:04.946 }, 00:11:04.946 "claimed": true, 00:11:04.946 "claim_type": "exclusive_write", 00:11:04.946 "zoned": false, 00:11:04.946 "supported_io_types": { 00:11:04.946 "read": true, 00:11:04.946 "write": true, 00:11:04.946 "unmap": true, 00:11:04.946 "flush": true, 00:11:04.946 "reset": true, 00:11:04.946 
"nvme_admin": false, 00:11:04.946 "nvme_io": false, 00:11:04.946 "nvme_io_md": false, 00:11:04.946 "write_zeroes": true, 00:11:04.946 "zcopy": true, 00:11:04.946 "get_zone_info": false, 00:11:04.946 "zone_management": false, 00:11:04.946 "zone_append": false, 00:11:04.946 "compare": false, 00:11:04.946 "compare_and_write": false, 00:11:04.946 "abort": true, 00:11:04.946 "seek_hole": false, 00:11:04.946 "seek_data": false, 00:11:04.946 "copy": true, 00:11:04.946 "nvme_iov_md": false 00:11:04.946 }, 00:11:04.946 "memory_domains": [ 00:11:04.946 { 00:11:04.946 "dma_device_id": "system", 00:11:04.946 "dma_device_type": 1 00:11:04.946 }, 00:11:04.946 { 00:11:04.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.946 "dma_device_type": 2 00:11:04.946 } 00:11:04.946 ], 00:11:04.946 "driver_specific": {} 00:11:04.946 } 00:11:04.946 ] 00:11:04.946 04:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.946 04:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:04.946 04:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:04.946 04:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:04.946 04:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:04.946 04:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:04.946 04:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:04.946 04:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:04.946 04:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:04.946 04:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:04.946 04:09:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.946 04:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.946 04:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.946 04:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.946 04:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.946 04:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:04.946 04:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.946 04:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.946 04:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.946 04:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.946 "name": "Existed_Raid", 00:11:04.946 "uuid": "0793ef56-5517-4b95-9438-211b9002cbf6", 00:11:04.946 "strip_size_kb": 0, 00:11:04.946 "state": "online", 00:11:04.946 "raid_level": "raid1", 00:11:04.946 "superblock": false, 00:11:04.946 "num_base_bdevs": 4, 00:11:04.946 "num_base_bdevs_discovered": 4, 00:11:04.946 "num_base_bdevs_operational": 4, 00:11:04.946 "base_bdevs_list": [ 00:11:04.946 { 00:11:04.946 "name": "BaseBdev1", 00:11:04.946 "uuid": "dd76b402-e23d-47c0-bb16-c40b9016fe45", 00:11:04.946 "is_configured": true, 00:11:04.946 "data_offset": 0, 00:11:04.946 "data_size": 65536 00:11:04.946 }, 00:11:04.946 { 00:11:04.946 "name": "BaseBdev2", 00:11:04.946 "uuid": "15d16e12-7201-4460-bdb2-f47402e830c9", 00:11:04.946 "is_configured": true, 00:11:04.946 "data_offset": 0, 00:11:04.946 "data_size": 65536 00:11:04.946 }, 00:11:04.946 { 00:11:04.946 "name": "BaseBdev3", 00:11:04.946 "uuid": 
"006c47f6-0273-4922-a02a-7a3d7dc54162", 00:11:04.946 "is_configured": true, 00:11:04.946 "data_offset": 0, 00:11:04.946 "data_size": 65536 00:11:04.946 }, 00:11:04.946 { 00:11:04.946 "name": "BaseBdev4", 00:11:04.946 "uuid": "d31202da-cb23-4f0b-9051-61c51c2af640", 00:11:04.946 "is_configured": true, 00:11:04.946 "data_offset": 0, 00:11:04.946 "data_size": 65536 00:11:04.946 } 00:11:04.946 ] 00:11:04.946 }' 00:11:04.946 04:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.946 04:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.515 04:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:05.515 04:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:05.515 04:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:05.515 04:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:05.515 04:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:05.515 04:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:05.515 04:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:05.515 04:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:05.515 04:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.515 04:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.515 [2024-11-21 04:09:05.314210] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:05.515 04:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.515 04:09:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:05.515 "name": "Existed_Raid", 00:11:05.515 "aliases": [ 00:11:05.515 "0793ef56-5517-4b95-9438-211b9002cbf6" 00:11:05.515 ], 00:11:05.515 "product_name": "Raid Volume", 00:11:05.515 "block_size": 512, 00:11:05.515 "num_blocks": 65536, 00:11:05.515 "uuid": "0793ef56-5517-4b95-9438-211b9002cbf6", 00:11:05.515 "assigned_rate_limits": { 00:11:05.515 "rw_ios_per_sec": 0, 00:11:05.515 "rw_mbytes_per_sec": 0, 00:11:05.515 "r_mbytes_per_sec": 0, 00:11:05.515 "w_mbytes_per_sec": 0 00:11:05.515 }, 00:11:05.515 "claimed": false, 00:11:05.515 "zoned": false, 00:11:05.515 "supported_io_types": { 00:11:05.515 "read": true, 00:11:05.515 "write": true, 00:11:05.515 "unmap": false, 00:11:05.515 "flush": false, 00:11:05.515 "reset": true, 00:11:05.515 "nvme_admin": false, 00:11:05.515 "nvme_io": false, 00:11:05.515 "nvme_io_md": false, 00:11:05.515 "write_zeroes": true, 00:11:05.515 "zcopy": false, 00:11:05.515 "get_zone_info": false, 00:11:05.515 "zone_management": false, 00:11:05.515 "zone_append": false, 00:11:05.515 "compare": false, 00:11:05.515 "compare_and_write": false, 00:11:05.515 "abort": false, 00:11:05.515 "seek_hole": false, 00:11:05.515 "seek_data": false, 00:11:05.515 "copy": false, 00:11:05.515 "nvme_iov_md": false 00:11:05.515 }, 00:11:05.515 "memory_domains": [ 00:11:05.515 { 00:11:05.515 "dma_device_id": "system", 00:11:05.515 "dma_device_type": 1 00:11:05.515 }, 00:11:05.515 { 00:11:05.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.515 "dma_device_type": 2 00:11:05.515 }, 00:11:05.515 { 00:11:05.515 "dma_device_id": "system", 00:11:05.515 "dma_device_type": 1 00:11:05.515 }, 00:11:05.515 { 00:11:05.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.515 "dma_device_type": 2 00:11:05.515 }, 00:11:05.515 { 00:11:05.515 "dma_device_id": "system", 00:11:05.515 "dma_device_type": 1 00:11:05.515 }, 00:11:05.515 { 00:11:05.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:11:05.515 "dma_device_type": 2 00:11:05.515 }, 00:11:05.515 { 00:11:05.515 "dma_device_id": "system", 00:11:05.515 "dma_device_type": 1 00:11:05.515 }, 00:11:05.515 { 00:11:05.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.515 "dma_device_type": 2 00:11:05.515 } 00:11:05.515 ], 00:11:05.515 "driver_specific": { 00:11:05.515 "raid": { 00:11:05.515 "uuid": "0793ef56-5517-4b95-9438-211b9002cbf6", 00:11:05.515 "strip_size_kb": 0, 00:11:05.515 "state": "online", 00:11:05.515 "raid_level": "raid1", 00:11:05.515 "superblock": false, 00:11:05.515 "num_base_bdevs": 4, 00:11:05.515 "num_base_bdevs_discovered": 4, 00:11:05.515 "num_base_bdevs_operational": 4, 00:11:05.515 "base_bdevs_list": [ 00:11:05.515 { 00:11:05.515 "name": "BaseBdev1", 00:11:05.515 "uuid": "dd76b402-e23d-47c0-bb16-c40b9016fe45", 00:11:05.515 "is_configured": true, 00:11:05.515 "data_offset": 0, 00:11:05.515 "data_size": 65536 00:11:05.515 }, 00:11:05.515 { 00:11:05.515 "name": "BaseBdev2", 00:11:05.515 "uuid": "15d16e12-7201-4460-bdb2-f47402e830c9", 00:11:05.515 "is_configured": true, 00:11:05.515 "data_offset": 0, 00:11:05.516 "data_size": 65536 00:11:05.516 }, 00:11:05.516 { 00:11:05.516 "name": "BaseBdev3", 00:11:05.516 "uuid": "006c47f6-0273-4922-a02a-7a3d7dc54162", 00:11:05.516 "is_configured": true, 00:11:05.516 "data_offset": 0, 00:11:05.516 "data_size": 65536 00:11:05.516 }, 00:11:05.516 { 00:11:05.516 "name": "BaseBdev4", 00:11:05.516 "uuid": "d31202da-cb23-4f0b-9051-61c51c2af640", 00:11:05.516 "is_configured": true, 00:11:05.516 "data_offset": 0, 00:11:05.516 "data_size": 65536 00:11:05.516 } 00:11:05.516 ] 00:11:05.516 } 00:11:05.516 } 00:11:05.516 }' 00:11:05.516 04:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:05.516 04:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:05.516 BaseBdev2 00:11:05.516 BaseBdev3 
00:11:05.516 BaseBdev4' 00:11:05.516 04:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.516 04:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:05.516 04:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:05.516 04:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:05.516 04:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.516 04:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.516 04:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.516 04:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.516 04:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:05.516 04:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:05.516 04:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:05.516 04:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.775 04:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:05.775 04:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.775 04:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.775 04:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.775 04:09:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:05.775 04:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:05.775 04:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:05.775 04:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:05.775 04:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.775 04:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.775 04:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.775 04:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.775 04:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:05.775 04:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:05.775 04:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:05.775 04:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:05.775 04:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.775 04:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.775 04:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.775 04:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.775 04:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:05.775 04:09:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:05.775 04:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:05.775 04:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.775 04:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.775 [2024-11-21 04:09:05.637469] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:05.775 04:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.775 04:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:05.775 04:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:05.775 04:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:05.775 04:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:05.775 04:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:05.775 04:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:05.775 04:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:05.776 04:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:05.776 04:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:05.776 04:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:05.776 04:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:05.776 04:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.776 
04:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.776 04:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.776 04:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.776 04:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.776 04:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:05.776 04:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.776 04:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.776 04:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.776 04:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.776 "name": "Existed_Raid", 00:11:05.776 "uuid": "0793ef56-5517-4b95-9438-211b9002cbf6", 00:11:05.776 "strip_size_kb": 0, 00:11:05.776 "state": "online", 00:11:05.776 "raid_level": "raid1", 00:11:05.776 "superblock": false, 00:11:05.776 "num_base_bdevs": 4, 00:11:05.776 "num_base_bdevs_discovered": 3, 00:11:05.776 "num_base_bdevs_operational": 3, 00:11:05.776 "base_bdevs_list": [ 00:11:05.776 { 00:11:05.776 "name": null, 00:11:05.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.776 "is_configured": false, 00:11:05.776 "data_offset": 0, 00:11:05.776 "data_size": 65536 00:11:05.776 }, 00:11:05.776 { 00:11:05.776 "name": "BaseBdev2", 00:11:05.776 "uuid": "15d16e12-7201-4460-bdb2-f47402e830c9", 00:11:05.776 "is_configured": true, 00:11:05.776 "data_offset": 0, 00:11:05.776 "data_size": 65536 00:11:05.776 }, 00:11:05.776 { 00:11:05.776 "name": "BaseBdev3", 00:11:05.776 "uuid": "006c47f6-0273-4922-a02a-7a3d7dc54162", 00:11:05.776 "is_configured": true, 00:11:05.776 "data_offset": 0, 
00:11:05.776 "data_size": 65536 00:11:05.776 }, 00:11:05.776 { 00:11:05.776 "name": "BaseBdev4", 00:11:05.776 "uuid": "d31202da-cb23-4f0b-9051-61c51c2af640", 00:11:05.776 "is_configured": true, 00:11:05.776 "data_offset": 0, 00:11:05.776 "data_size": 65536 00:11:05.776 } 00:11:05.776 ] 00:11:05.776 }' 00:11:05.776 04:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.776 04:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.342 04:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:06.342 04:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:06.342 04:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.342 04:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.342 04:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.342 04:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:06.342 04:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.342 04:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:06.342 04:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:06.342 04:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:06.342 04:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.342 04:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.342 [2024-11-21 04:09:06.165489] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:06.342 04:09:06 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.342 04:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:06.342 04:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:06.342 04:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.342 04:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.342 04:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.342 04:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:06.342 04:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.342 04:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:06.342 04:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:06.342 04:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:06.342 04:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.342 04:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.342 [2024-11-21 04:09:06.246365] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:06.342 04:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.342 04:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:06.342 04:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:06.342 04:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.342 04:09:06 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.342 04:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:06.342 04:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.342 04:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.342 04:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:06.342 04:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:06.342 04:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:06.342 04:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.342 04:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.342 [2024-11-21 04:09:06.302145] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:06.342 [2024-11-21 04:09:06.302316] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:06.602 [2024-11-21 04:09:06.323660] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:06.602 [2024-11-21 04:09:06.323778] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:06.602 [2024-11-21 04:09:06.323806] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:11:06.602 04:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.602 04:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:06.602 04:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:06.602 04:09:06 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.602 04:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:06.602 04:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.602 04:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.602 04:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.602 04:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:06.602 04:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:06.602 04:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:06.602 04:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:06.602 04:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:06.602 04:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:06.602 04:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.602 04:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.602 BaseBdev2 00:11:06.602 04:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.602 04:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:06.602 04:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:06.602 04:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:06.602 04:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:06.602 04:09:06 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:06.602 04:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:06.602 04:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:06.602 04:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.602 04:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.602 04:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.602 04:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:06.602 04:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.602 04:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.602 [ 00:11:06.602 { 00:11:06.602 "name": "BaseBdev2", 00:11:06.602 "aliases": [ 00:11:06.602 "1efb8696-42fe-48e7-b1db-4ceecea1354d" 00:11:06.602 ], 00:11:06.602 "product_name": "Malloc disk", 00:11:06.602 "block_size": 512, 00:11:06.602 "num_blocks": 65536, 00:11:06.602 "uuid": "1efb8696-42fe-48e7-b1db-4ceecea1354d", 00:11:06.602 "assigned_rate_limits": { 00:11:06.602 "rw_ios_per_sec": 0, 00:11:06.602 "rw_mbytes_per_sec": 0, 00:11:06.602 "r_mbytes_per_sec": 0, 00:11:06.602 "w_mbytes_per_sec": 0 00:11:06.602 }, 00:11:06.602 "claimed": false, 00:11:06.602 "zoned": false, 00:11:06.602 "supported_io_types": { 00:11:06.602 "read": true, 00:11:06.602 "write": true, 00:11:06.602 "unmap": true, 00:11:06.602 "flush": true, 00:11:06.602 "reset": true, 00:11:06.602 "nvme_admin": false, 00:11:06.602 "nvme_io": false, 00:11:06.602 "nvme_io_md": false, 00:11:06.602 "write_zeroes": true, 00:11:06.602 "zcopy": true, 00:11:06.602 "get_zone_info": false, 00:11:06.602 "zone_management": false, 00:11:06.602 "zone_append": false, 
00:11:06.602 "compare": false, 00:11:06.602 "compare_and_write": false, 00:11:06.602 "abort": true, 00:11:06.602 "seek_hole": false, 00:11:06.602 "seek_data": false, 00:11:06.602 "copy": true, 00:11:06.602 "nvme_iov_md": false 00:11:06.602 }, 00:11:06.602 "memory_domains": [ 00:11:06.602 { 00:11:06.602 "dma_device_id": "system", 00:11:06.602 "dma_device_type": 1 00:11:06.602 }, 00:11:06.602 { 00:11:06.602 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.602 "dma_device_type": 2 00:11:06.602 } 00:11:06.602 ], 00:11:06.602 "driver_specific": {} 00:11:06.602 } 00:11:06.602 ] 00:11:06.602 04:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.602 04:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:06.602 04:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:06.602 04:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:06.602 04:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:06.602 04:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.602 04:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.602 BaseBdev3 00:11:06.602 04:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.602 04:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:06.602 04:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:06.602 04:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:06.602 04:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:06.602 04:09:06 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:06.602 04:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:06.602 04:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:06.602 04:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.602 04:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.602 04:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.602 04:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:06.602 04:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.602 04:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.602 [ 00:11:06.602 { 00:11:06.602 "name": "BaseBdev3", 00:11:06.602 "aliases": [ 00:11:06.602 "b7532961-e982-4959-b357-9595d62e1300" 00:11:06.602 ], 00:11:06.602 "product_name": "Malloc disk", 00:11:06.602 "block_size": 512, 00:11:06.602 "num_blocks": 65536, 00:11:06.602 "uuid": "b7532961-e982-4959-b357-9595d62e1300", 00:11:06.602 "assigned_rate_limits": { 00:11:06.602 "rw_ios_per_sec": 0, 00:11:06.602 "rw_mbytes_per_sec": 0, 00:11:06.602 "r_mbytes_per_sec": 0, 00:11:06.602 "w_mbytes_per_sec": 0 00:11:06.602 }, 00:11:06.602 "claimed": false, 00:11:06.602 "zoned": false, 00:11:06.602 "supported_io_types": { 00:11:06.602 "read": true, 00:11:06.602 "write": true, 00:11:06.602 "unmap": true, 00:11:06.602 "flush": true, 00:11:06.602 "reset": true, 00:11:06.602 "nvme_admin": false, 00:11:06.602 "nvme_io": false, 00:11:06.602 "nvme_io_md": false, 00:11:06.602 "write_zeroes": true, 00:11:06.602 "zcopy": true, 00:11:06.602 "get_zone_info": false, 00:11:06.602 "zone_management": false, 00:11:06.602 "zone_append": false, 
00:11:06.602 "compare": false, 00:11:06.602 "compare_and_write": false, 00:11:06.602 "abort": true, 00:11:06.602 "seek_hole": false, 00:11:06.602 "seek_data": false, 00:11:06.602 "copy": true, 00:11:06.602 "nvme_iov_md": false 00:11:06.602 }, 00:11:06.602 "memory_domains": [ 00:11:06.602 { 00:11:06.602 "dma_device_id": "system", 00:11:06.602 "dma_device_type": 1 00:11:06.603 }, 00:11:06.603 { 00:11:06.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.603 "dma_device_type": 2 00:11:06.603 } 00:11:06.603 ], 00:11:06.603 "driver_specific": {} 00:11:06.603 } 00:11:06.603 ] 00:11:06.603 04:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.603 04:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:06.603 04:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:06.603 04:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:06.603 04:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:06.603 04:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.603 04:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.603 BaseBdev4 00:11:06.603 04:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.603 04:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:06.603 04:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:06.603 04:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:06.603 04:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:06.603 04:09:06 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:06.603 04:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:06.603 04:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:06.603 04:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.603 04:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.603 04:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.603 04:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:06.603 04:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.603 04:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.603 [ 00:11:06.603 { 00:11:06.603 "name": "BaseBdev4", 00:11:06.603 "aliases": [ 00:11:06.603 "fed549fd-f068-4f68-9f25-035e9e85efe8" 00:11:06.603 ], 00:11:06.603 "product_name": "Malloc disk", 00:11:06.603 "block_size": 512, 00:11:06.603 "num_blocks": 65536, 00:11:06.603 "uuid": "fed549fd-f068-4f68-9f25-035e9e85efe8", 00:11:06.603 "assigned_rate_limits": { 00:11:06.603 "rw_ios_per_sec": 0, 00:11:06.603 "rw_mbytes_per_sec": 0, 00:11:06.603 "r_mbytes_per_sec": 0, 00:11:06.603 "w_mbytes_per_sec": 0 00:11:06.603 }, 00:11:06.603 "claimed": false, 00:11:06.603 "zoned": false, 00:11:06.603 "supported_io_types": { 00:11:06.603 "read": true, 00:11:06.603 "write": true, 00:11:06.603 "unmap": true, 00:11:06.603 "flush": true, 00:11:06.603 "reset": true, 00:11:06.603 "nvme_admin": false, 00:11:06.603 "nvme_io": false, 00:11:06.603 "nvme_io_md": false, 00:11:06.603 "write_zeroes": true, 00:11:06.603 "zcopy": true, 00:11:06.603 "get_zone_info": false, 00:11:06.603 "zone_management": false, 00:11:06.603 "zone_append": false, 
00:11:06.603 "compare": false, 00:11:06.603 "compare_and_write": false, 00:11:06.603 "abort": true, 00:11:06.603 "seek_hole": false, 00:11:06.603 "seek_data": false, 00:11:06.603 "copy": true, 00:11:06.603 "nvme_iov_md": false 00:11:06.603 }, 00:11:06.603 "memory_domains": [ 00:11:06.603 { 00:11:06.603 "dma_device_id": "system", 00:11:06.603 "dma_device_type": 1 00:11:06.603 }, 00:11:06.603 { 00:11:06.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.603 "dma_device_type": 2 00:11:06.603 } 00:11:06.603 ], 00:11:06.603 "driver_specific": {} 00:11:06.603 } 00:11:06.603 ] 00:11:06.603 04:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.603 04:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:06.603 04:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:06.603 04:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:06.603 04:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:06.603 04:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.603 04:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.603 [2024-11-21 04:09:06.561781] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:06.603 [2024-11-21 04:09:06.561910] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:06.603 [2024-11-21 04:09:06.561958] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:06.603 [2024-11-21 04:09:06.564160] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:06.603 [2024-11-21 04:09:06.564260] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:06.603 04:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.603 04:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:06.603 04:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:06.603 04:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:06.603 04:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:06.603 04:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:06.603 04:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:06.603 04:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.603 04:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.603 04:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.603 04:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.863 04:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.863 04:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:06.863 04:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.863 04:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.863 04:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.863 04:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:11:06.863 "name": "Existed_Raid", 00:11:06.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.863 "strip_size_kb": 0, 00:11:06.863 "state": "configuring", 00:11:06.863 "raid_level": "raid1", 00:11:06.863 "superblock": false, 00:11:06.863 "num_base_bdevs": 4, 00:11:06.863 "num_base_bdevs_discovered": 3, 00:11:06.863 "num_base_bdevs_operational": 4, 00:11:06.863 "base_bdevs_list": [ 00:11:06.863 { 00:11:06.863 "name": "BaseBdev1", 00:11:06.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.863 "is_configured": false, 00:11:06.863 "data_offset": 0, 00:11:06.863 "data_size": 0 00:11:06.863 }, 00:11:06.863 { 00:11:06.863 "name": "BaseBdev2", 00:11:06.863 "uuid": "1efb8696-42fe-48e7-b1db-4ceecea1354d", 00:11:06.863 "is_configured": true, 00:11:06.863 "data_offset": 0, 00:11:06.863 "data_size": 65536 00:11:06.863 }, 00:11:06.863 { 00:11:06.863 "name": "BaseBdev3", 00:11:06.863 "uuid": "b7532961-e982-4959-b357-9595d62e1300", 00:11:06.863 "is_configured": true, 00:11:06.863 "data_offset": 0, 00:11:06.863 "data_size": 65536 00:11:06.863 }, 00:11:06.863 { 00:11:06.863 "name": "BaseBdev4", 00:11:06.863 "uuid": "fed549fd-f068-4f68-9f25-035e9e85efe8", 00:11:06.863 "is_configured": true, 00:11:06.863 "data_offset": 0, 00:11:06.863 "data_size": 65536 00:11:06.863 } 00:11:06.863 ] 00:11:06.863 }' 00:11:06.863 04:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.863 04:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.122 04:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:07.122 04:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.122 04:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.122 [2024-11-21 04:09:06.997084] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:11:07.122 04:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.122 04:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:07.122 04:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:07.122 04:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:07.122 04:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:07.122 04:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:07.122 04:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:07.122 04:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.122 04:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.122 04:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.122 04:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.122 04:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.122 04:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.122 04:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.122 04:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:07.122 04:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.122 04:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.122 "name": "Existed_Raid", 00:11:07.122 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:07.122 "strip_size_kb": 0, 00:11:07.122 "state": "configuring", 00:11:07.122 "raid_level": "raid1", 00:11:07.122 "superblock": false, 00:11:07.122 "num_base_bdevs": 4, 00:11:07.122 "num_base_bdevs_discovered": 2, 00:11:07.122 "num_base_bdevs_operational": 4, 00:11:07.122 "base_bdevs_list": [ 00:11:07.122 { 00:11:07.122 "name": "BaseBdev1", 00:11:07.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.122 "is_configured": false, 00:11:07.122 "data_offset": 0, 00:11:07.122 "data_size": 0 00:11:07.122 }, 00:11:07.122 { 00:11:07.122 "name": null, 00:11:07.122 "uuid": "1efb8696-42fe-48e7-b1db-4ceecea1354d", 00:11:07.122 "is_configured": false, 00:11:07.122 "data_offset": 0, 00:11:07.122 "data_size": 65536 00:11:07.122 }, 00:11:07.122 { 00:11:07.122 "name": "BaseBdev3", 00:11:07.122 "uuid": "b7532961-e982-4959-b357-9595d62e1300", 00:11:07.122 "is_configured": true, 00:11:07.122 "data_offset": 0, 00:11:07.122 "data_size": 65536 00:11:07.122 }, 00:11:07.122 { 00:11:07.122 "name": "BaseBdev4", 00:11:07.122 "uuid": "fed549fd-f068-4f68-9f25-035e9e85efe8", 00:11:07.122 "is_configured": true, 00:11:07.122 "data_offset": 0, 00:11:07.122 "data_size": 65536 00:11:07.122 } 00:11:07.122 ] 00:11:07.122 }' 00:11:07.122 04:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.122 04:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.692 04:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.692 04:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.692 04:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:07.692 04:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.692 04:09:07 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.692 04:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:07.692 04:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:07.692 04:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.692 04:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.692 [2024-11-21 04:09:07.453192] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:07.692 BaseBdev1 00:11:07.692 04:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.692 04:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:07.692 04:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:07.692 04:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:07.692 04:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:07.692 04:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:07.692 04:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:07.692 04:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:07.692 04:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.692 04:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.692 04:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.692 04:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev1 -t 2000 00:11:07.692 04:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.692 04:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.692 [ 00:11:07.692 { 00:11:07.692 "name": "BaseBdev1", 00:11:07.692 "aliases": [ 00:11:07.692 "c17b9e10-c3a6-4124-9205-7e45f3aa65bc" 00:11:07.692 ], 00:11:07.692 "product_name": "Malloc disk", 00:11:07.692 "block_size": 512, 00:11:07.692 "num_blocks": 65536, 00:11:07.692 "uuid": "c17b9e10-c3a6-4124-9205-7e45f3aa65bc", 00:11:07.692 "assigned_rate_limits": { 00:11:07.692 "rw_ios_per_sec": 0, 00:11:07.692 "rw_mbytes_per_sec": 0, 00:11:07.692 "r_mbytes_per_sec": 0, 00:11:07.692 "w_mbytes_per_sec": 0 00:11:07.692 }, 00:11:07.692 "claimed": true, 00:11:07.692 "claim_type": "exclusive_write", 00:11:07.692 "zoned": false, 00:11:07.692 "supported_io_types": { 00:11:07.692 "read": true, 00:11:07.692 "write": true, 00:11:07.692 "unmap": true, 00:11:07.692 "flush": true, 00:11:07.692 "reset": true, 00:11:07.692 "nvme_admin": false, 00:11:07.692 "nvme_io": false, 00:11:07.692 "nvme_io_md": false, 00:11:07.692 "write_zeroes": true, 00:11:07.692 "zcopy": true, 00:11:07.692 "get_zone_info": false, 00:11:07.692 "zone_management": false, 00:11:07.692 "zone_append": false, 00:11:07.692 "compare": false, 00:11:07.692 "compare_and_write": false, 00:11:07.692 "abort": true, 00:11:07.692 "seek_hole": false, 00:11:07.692 "seek_data": false, 00:11:07.692 "copy": true, 00:11:07.692 "nvme_iov_md": false 00:11:07.692 }, 00:11:07.692 "memory_domains": [ 00:11:07.692 { 00:11:07.692 "dma_device_id": "system", 00:11:07.692 "dma_device_type": 1 00:11:07.692 }, 00:11:07.692 { 00:11:07.692 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.692 "dma_device_type": 2 00:11:07.692 } 00:11:07.692 ], 00:11:07.692 "driver_specific": {} 00:11:07.692 } 00:11:07.692 ] 00:11:07.692 04:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:07.692 04:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:07.692 04:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:07.692 04:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:07.692 04:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:07.692 04:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:07.692 04:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:07.692 04:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:07.692 04:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.692 04:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.692 04:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.692 04:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.692 04:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.692 04:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:07.692 04:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.692 04:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.692 04:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.692 04:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.692 "name": "Existed_Raid", 00:11:07.692 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:07.692 "strip_size_kb": 0, 00:11:07.692 "state": "configuring", 00:11:07.692 "raid_level": "raid1", 00:11:07.692 "superblock": false, 00:11:07.692 "num_base_bdevs": 4, 00:11:07.692 "num_base_bdevs_discovered": 3, 00:11:07.692 "num_base_bdevs_operational": 4, 00:11:07.692 "base_bdevs_list": [ 00:11:07.692 { 00:11:07.692 "name": "BaseBdev1", 00:11:07.692 "uuid": "c17b9e10-c3a6-4124-9205-7e45f3aa65bc", 00:11:07.692 "is_configured": true, 00:11:07.692 "data_offset": 0, 00:11:07.692 "data_size": 65536 00:11:07.692 }, 00:11:07.692 { 00:11:07.692 "name": null, 00:11:07.692 "uuid": "1efb8696-42fe-48e7-b1db-4ceecea1354d", 00:11:07.692 "is_configured": false, 00:11:07.692 "data_offset": 0, 00:11:07.692 "data_size": 65536 00:11:07.692 }, 00:11:07.692 { 00:11:07.692 "name": "BaseBdev3", 00:11:07.692 "uuid": "b7532961-e982-4959-b357-9595d62e1300", 00:11:07.692 "is_configured": true, 00:11:07.692 "data_offset": 0, 00:11:07.692 "data_size": 65536 00:11:07.692 }, 00:11:07.692 { 00:11:07.692 "name": "BaseBdev4", 00:11:07.692 "uuid": "fed549fd-f068-4f68-9f25-035e9e85efe8", 00:11:07.692 "is_configured": true, 00:11:07.692 "data_offset": 0, 00:11:07.692 "data_size": 65536 00:11:07.692 } 00:11:07.692 ] 00:11:07.692 }' 00:11:07.692 04:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.692 04:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.952 04:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:07.952 04:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.952 04:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.952 04:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.212 04:09:07 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.212 04:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:08.212 04:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:08.212 04:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.212 04:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.212 [2024-11-21 04:09:07.948433] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:08.212 04:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.212 04:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:08.212 04:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:08.212 04:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:08.212 04:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:08.212 04:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:08.212 04:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:08.212 04:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.212 04:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.212 04:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.212 04:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.212 04:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:11:08.212 04:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.212 04:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.212 04:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:08.212 04:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.212 04:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.212 "name": "Existed_Raid", 00:11:08.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.212 "strip_size_kb": 0, 00:11:08.212 "state": "configuring", 00:11:08.212 "raid_level": "raid1", 00:11:08.212 "superblock": false, 00:11:08.212 "num_base_bdevs": 4, 00:11:08.212 "num_base_bdevs_discovered": 2, 00:11:08.212 "num_base_bdevs_operational": 4, 00:11:08.212 "base_bdevs_list": [ 00:11:08.212 { 00:11:08.213 "name": "BaseBdev1", 00:11:08.213 "uuid": "c17b9e10-c3a6-4124-9205-7e45f3aa65bc", 00:11:08.213 "is_configured": true, 00:11:08.213 "data_offset": 0, 00:11:08.213 "data_size": 65536 00:11:08.213 }, 00:11:08.213 { 00:11:08.213 "name": null, 00:11:08.213 "uuid": "1efb8696-42fe-48e7-b1db-4ceecea1354d", 00:11:08.213 "is_configured": false, 00:11:08.213 "data_offset": 0, 00:11:08.213 "data_size": 65536 00:11:08.213 }, 00:11:08.213 { 00:11:08.213 "name": null, 00:11:08.213 "uuid": "b7532961-e982-4959-b357-9595d62e1300", 00:11:08.213 "is_configured": false, 00:11:08.213 "data_offset": 0, 00:11:08.213 "data_size": 65536 00:11:08.213 }, 00:11:08.213 { 00:11:08.213 "name": "BaseBdev4", 00:11:08.213 "uuid": "fed549fd-f068-4f68-9f25-035e9e85efe8", 00:11:08.213 "is_configured": true, 00:11:08.213 "data_offset": 0, 00:11:08.213 "data_size": 65536 00:11:08.213 } 00:11:08.213 ] 00:11:08.213 }' 00:11:08.213 04:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.213 04:09:08 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.473 04:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.473 04:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.473 04:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.473 04:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:08.473 04:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.473 04:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:08.473 04:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:08.473 04:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.473 04:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.733 [2024-11-21 04:09:08.447601] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:08.733 04:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.733 04:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:08.733 04:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:08.733 04:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:08.733 04:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:08.733 04:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:08.733 04:09:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:08.733 04:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.733 04:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.733 04:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.733 04:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.733 04:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:08.733 04:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.733 04:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.733 04:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.733 04:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.733 04:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.733 "name": "Existed_Raid", 00:11:08.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.733 "strip_size_kb": 0, 00:11:08.733 "state": "configuring", 00:11:08.733 "raid_level": "raid1", 00:11:08.733 "superblock": false, 00:11:08.733 "num_base_bdevs": 4, 00:11:08.733 "num_base_bdevs_discovered": 3, 00:11:08.733 "num_base_bdevs_operational": 4, 00:11:08.733 "base_bdevs_list": [ 00:11:08.733 { 00:11:08.733 "name": "BaseBdev1", 00:11:08.733 "uuid": "c17b9e10-c3a6-4124-9205-7e45f3aa65bc", 00:11:08.733 "is_configured": true, 00:11:08.733 "data_offset": 0, 00:11:08.733 "data_size": 65536 00:11:08.733 }, 00:11:08.733 { 00:11:08.733 "name": null, 00:11:08.733 "uuid": "1efb8696-42fe-48e7-b1db-4ceecea1354d", 00:11:08.733 "is_configured": false, 00:11:08.733 "data_offset": 
0, 00:11:08.733 "data_size": 65536 00:11:08.733 }, 00:11:08.733 { 00:11:08.733 "name": "BaseBdev3", 00:11:08.733 "uuid": "b7532961-e982-4959-b357-9595d62e1300", 00:11:08.733 "is_configured": true, 00:11:08.733 "data_offset": 0, 00:11:08.733 "data_size": 65536 00:11:08.733 }, 00:11:08.733 { 00:11:08.733 "name": "BaseBdev4", 00:11:08.733 "uuid": "fed549fd-f068-4f68-9f25-035e9e85efe8", 00:11:08.733 "is_configured": true, 00:11:08.733 "data_offset": 0, 00:11:08.733 "data_size": 65536 00:11:08.733 } 00:11:08.733 ] 00:11:08.733 }' 00:11:08.733 04:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.733 04:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.994 04:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:08.994 04:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.994 04:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.994 04:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.994 04:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.994 04:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:08.994 04:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:08.994 04:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.994 04:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.994 [2024-11-21 04:09:08.946783] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:09.255 04:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.255 04:09:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:09.255 04:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:09.255 04:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:09.255 04:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:09.255 04:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:09.255 04:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:09.255 04:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.255 04:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.255 04:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.255 04:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.255 04:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.255 04:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:09.255 04:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.255 04:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.255 04:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.255 04:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.255 "name": "Existed_Raid", 00:11:09.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.255 "strip_size_kb": 0, 00:11:09.255 "state": "configuring", 00:11:09.255 
"raid_level": "raid1", 00:11:09.255 "superblock": false, 00:11:09.255 "num_base_bdevs": 4, 00:11:09.255 "num_base_bdevs_discovered": 2, 00:11:09.255 "num_base_bdevs_operational": 4, 00:11:09.256 "base_bdevs_list": [ 00:11:09.256 { 00:11:09.256 "name": null, 00:11:09.256 "uuid": "c17b9e10-c3a6-4124-9205-7e45f3aa65bc", 00:11:09.256 "is_configured": false, 00:11:09.256 "data_offset": 0, 00:11:09.256 "data_size": 65536 00:11:09.256 }, 00:11:09.256 { 00:11:09.256 "name": null, 00:11:09.256 "uuid": "1efb8696-42fe-48e7-b1db-4ceecea1354d", 00:11:09.256 "is_configured": false, 00:11:09.256 "data_offset": 0, 00:11:09.256 "data_size": 65536 00:11:09.256 }, 00:11:09.256 { 00:11:09.256 "name": "BaseBdev3", 00:11:09.256 "uuid": "b7532961-e982-4959-b357-9595d62e1300", 00:11:09.256 "is_configured": true, 00:11:09.256 "data_offset": 0, 00:11:09.256 "data_size": 65536 00:11:09.256 }, 00:11:09.256 { 00:11:09.256 "name": "BaseBdev4", 00:11:09.256 "uuid": "fed549fd-f068-4f68-9f25-035e9e85efe8", 00:11:09.256 "is_configured": true, 00:11:09.256 "data_offset": 0, 00:11:09.256 "data_size": 65536 00:11:09.256 } 00:11:09.256 ] 00:11:09.256 }' 00:11:09.256 04:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.256 04:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.514 04:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.514 04:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.514 04:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.514 04:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:09.514 04:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.514 04:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ 
false == \f\a\l\s\e ]] 00:11:09.514 04:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:09.514 04:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.514 04:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.514 [2024-11-21 04:09:09.418408] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:09.514 04:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.514 04:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:09.514 04:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:09.514 04:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:09.514 04:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:09.514 04:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:09.514 04:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:09.514 04:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.514 04:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.514 04:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.514 04:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.514 04:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:09.514 04:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:09.514 04:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.514 04:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.514 04:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.514 04:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.514 "name": "Existed_Raid", 00:11:09.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.514 "strip_size_kb": 0, 00:11:09.514 "state": "configuring", 00:11:09.514 "raid_level": "raid1", 00:11:09.514 "superblock": false, 00:11:09.514 "num_base_bdevs": 4, 00:11:09.514 "num_base_bdevs_discovered": 3, 00:11:09.514 "num_base_bdevs_operational": 4, 00:11:09.514 "base_bdevs_list": [ 00:11:09.514 { 00:11:09.514 "name": null, 00:11:09.514 "uuid": "c17b9e10-c3a6-4124-9205-7e45f3aa65bc", 00:11:09.514 "is_configured": false, 00:11:09.514 "data_offset": 0, 00:11:09.514 "data_size": 65536 00:11:09.514 }, 00:11:09.514 { 00:11:09.514 "name": "BaseBdev2", 00:11:09.514 "uuid": "1efb8696-42fe-48e7-b1db-4ceecea1354d", 00:11:09.514 "is_configured": true, 00:11:09.514 "data_offset": 0, 00:11:09.514 "data_size": 65536 00:11:09.514 }, 00:11:09.514 { 00:11:09.514 "name": "BaseBdev3", 00:11:09.514 "uuid": "b7532961-e982-4959-b357-9595d62e1300", 00:11:09.514 "is_configured": true, 00:11:09.514 "data_offset": 0, 00:11:09.514 "data_size": 65536 00:11:09.514 }, 00:11:09.514 { 00:11:09.514 "name": "BaseBdev4", 00:11:09.514 "uuid": "fed549fd-f068-4f68-9f25-035e9e85efe8", 00:11:09.514 "is_configured": true, 00:11:09.514 "data_offset": 0, 00:11:09.514 "data_size": 65536 00:11:09.514 } 00:11:09.514 ] 00:11:09.514 }' 00:11:09.514 04:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.514 04:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.083 04:09:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.083 04:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.083 04:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.083 04:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:10.083 04:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.083 04:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:10.083 04:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:10.083 04:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.083 04:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.083 04:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.083 04:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.083 04:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c17b9e10-c3a6-4124-9205-7e45f3aa65bc 00:11:10.083 04:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.083 04:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.083 [2024-11-21 04:09:09.866334] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:10.083 [2024-11-21 04:09:09.866442] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:11:10.083 [2024-11-21 04:09:09.866470] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:10.083 
[2024-11-21 04:09:09.866817] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:11:10.083 [2024-11-21 04:09:09.867047] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:11:10.083 [2024-11-21 04:09:09.867086] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:11:10.083 [2024-11-21 04:09:09.867400] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:10.083 NewBaseBdev 00:11:10.083 04:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.083 04:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:10.083 04:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:10.083 04:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:10.083 04:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:10.083 04:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:10.083 04:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:10.083 04:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:10.083 04:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.083 04:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.083 04:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.083 04:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:10.083 04:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:10.083 04:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.083 [ 00:11:10.083 { 00:11:10.083 "name": "NewBaseBdev", 00:11:10.083 "aliases": [ 00:11:10.083 "c17b9e10-c3a6-4124-9205-7e45f3aa65bc" 00:11:10.083 ], 00:11:10.083 "product_name": "Malloc disk", 00:11:10.083 "block_size": 512, 00:11:10.083 "num_blocks": 65536, 00:11:10.083 "uuid": "c17b9e10-c3a6-4124-9205-7e45f3aa65bc", 00:11:10.083 "assigned_rate_limits": { 00:11:10.083 "rw_ios_per_sec": 0, 00:11:10.083 "rw_mbytes_per_sec": 0, 00:11:10.083 "r_mbytes_per_sec": 0, 00:11:10.083 "w_mbytes_per_sec": 0 00:11:10.083 }, 00:11:10.083 "claimed": true, 00:11:10.083 "claim_type": "exclusive_write", 00:11:10.083 "zoned": false, 00:11:10.083 "supported_io_types": { 00:11:10.083 "read": true, 00:11:10.083 "write": true, 00:11:10.083 "unmap": true, 00:11:10.083 "flush": true, 00:11:10.083 "reset": true, 00:11:10.083 "nvme_admin": false, 00:11:10.083 "nvme_io": false, 00:11:10.083 "nvme_io_md": false, 00:11:10.083 "write_zeroes": true, 00:11:10.083 "zcopy": true, 00:11:10.083 "get_zone_info": false, 00:11:10.083 "zone_management": false, 00:11:10.083 "zone_append": false, 00:11:10.083 "compare": false, 00:11:10.083 "compare_and_write": false, 00:11:10.083 "abort": true, 00:11:10.083 "seek_hole": false, 00:11:10.083 "seek_data": false, 00:11:10.083 "copy": true, 00:11:10.083 "nvme_iov_md": false 00:11:10.083 }, 00:11:10.083 "memory_domains": [ 00:11:10.083 { 00:11:10.083 "dma_device_id": "system", 00:11:10.083 "dma_device_type": 1 00:11:10.083 }, 00:11:10.083 { 00:11:10.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:10.083 "dma_device_type": 2 00:11:10.083 } 00:11:10.083 ], 00:11:10.083 "driver_specific": {} 00:11:10.083 } 00:11:10.083 ] 00:11:10.083 04:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.083 04:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 
00:11:10.083 04:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:10.084 04:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:10.084 04:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:10.084 04:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:10.084 04:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:10.084 04:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:10.084 04:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.084 04:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.084 04:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.084 04:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.084 04:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.084 04:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.084 04:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:10.084 04:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.084 04:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.084 04:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.084 "name": "Existed_Raid", 00:11:10.084 "uuid": "4d57e90d-24b4-404c-9186-d67f073cbc11", 00:11:10.084 "strip_size_kb": 0, 00:11:10.084 "state": "online", 00:11:10.084 
"raid_level": "raid1", 00:11:10.084 "superblock": false, 00:11:10.084 "num_base_bdevs": 4, 00:11:10.084 "num_base_bdevs_discovered": 4, 00:11:10.084 "num_base_bdevs_operational": 4, 00:11:10.084 "base_bdevs_list": [ 00:11:10.084 { 00:11:10.084 "name": "NewBaseBdev", 00:11:10.084 "uuid": "c17b9e10-c3a6-4124-9205-7e45f3aa65bc", 00:11:10.084 "is_configured": true, 00:11:10.084 "data_offset": 0, 00:11:10.084 "data_size": 65536 00:11:10.084 }, 00:11:10.084 { 00:11:10.084 "name": "BaseBdev2", 00:11:10.084 "uuid": "1efb8696-42fe-48e7-b1db-4ceecea1354d", 00:11:10.084 "is_configured": true, 00:11:10.084 "data_offset": 0, 00:11:10.084 "data_size": 65536 00:11:10.084 }, 00:11:10.084 { 00:11:10.084 "name": "BaseBdev3", 00:11:10.084 "uuid": "b7532961-e982-4959-b357-9595d62e1300", 00:11:10.084 "is_configured": true, 00:11:10.084 "data_offset": 0, 00:11:10.084 "data_size": 65536 00:11:10.084 }, 00:11:10.084 { 00:11:10.084 "name": "BaseBdev4", 00:11:10.084 "uuid": "fed549fd-f068-4f68-9f25-035e9e85efe8", 00:11:10.084 "is_configured": true, 00:11:10.084 "data_offset": 0, 00:11:10.084 "data_size": 65536 00:11:10.084 } 00:11:10.084 ] 00:11:10.084 }' 00:11:10.084 04:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.084 04:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.653 04:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:10.653 04:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:10.653 04:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:10.653 04:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:10.653 04:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:10.653 04:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:11:10.653 04:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:10.653 04:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:10.653 04:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.653 04:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.653 [2024-11-21 04:09:10.365938] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:10.653 04:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.653 04:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:10.653 "name": "Existed_Raid", 00:11:10.653 "aliases": [ 00:11:10.653 "4d57e90d-24b4-404c-9186-d67f073cbc11" 00:11:10.653 ], 00:11:10.653 "product_name": "Raid Volume", 00:11:10.653 "block_size": 512, 00:11:10.653 "num_blocks": 65536, 00:11:10.653 "uuid": "4d57e90d-24b4-404c-9186-d67f073cbc11", 00:11:10.653 "assigned_rate_limits": { 00:11:10.653 "rw_ios_per_sec": 0, 00:11:10.653 "rw_mbytes_per_sec": 0, 00:11:10.653 "r_mbytes_per_sec": 0, 00:11:10.653 "w_mbytes_per_sec": 0 00:11:10.653 }, 00:11:10.653 "claimed": false, 00:11:10.653 "zoned": false, 00:11:10.653 "supported_io_types": { 00:11:10.653 "read": true, 00:11:10.653 "write": true, 00:11:10.654 "unmap": false, 00:11:10.654 "flush": false, 00:11:10.654 "reset": true, 00:11:10.654 "nvme_admin": false, 00:11:10.654 "nvme_io": false, 00:11:10.654 "nvme_io_md": false, 00:11:10.654 "write_zeroes": true, 00:11:10.654 "zcopy": false, 00:11:10.654 "get_zone_info": false, 00:11:10.654 "zone_management": false, 00:11:10.654 "zone_append": false, 00:11:10.654 "compare": false, 00:11:10.654 "compare_and_write": false, 00:11:10.654 "abort": false, 00:11:10.654 "seek_hole": false, 00:11:10.654 "seek_data": false, 00:11:10.654 
"copy": false, 00:11:10.654 "nvme_iov_md": false 00:11:10.654 }, 00:11:10.654 "memory_domains": [ 00:11:10.654 { 00:11:10.654 "dma_device_id": "system", 00:11:10.654 "dma_device_type": 1 00:11:10.654 }, 00:11:10.654 { 00:11:10.654 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:10.654 "dma_device_type": 2 00:11:10.654 }, 00:11:10.654 { 00:11:10.654 "dma_device_id": "system", 00:11:10.654 "dma_device_type": 1 00:11:10.654 }, 00:11:10.654 { 00:11:10.654 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:10.654 "dma_device_type": 2 00:11:10.654 }, 00:11:10.654 { 00:11:10.654 "dma_device_id": "system", 00:11:10.654 "dma_device_type": 1 00:11:10.654 }, 00:11:10.654 { 00:11:10.654 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:10.654 "dma_device_type": 2 00:11:10.654 }, 00:11:10.654 { 00:11:10.654 "dma_device_id": "system", 00:11:10.654 "dma_device_type": 1 00:11:10.654 }, 00:11:10.654 { 00:11:10.654 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:10.654 "dma_device_type": 2 00:11:10.654 } 00:11:10.654 ], 00:11:10.654 "driver_specific": { 00:11:10.654 "raid": { 00:11:10.654 "uuid": "4d57e90d-24b4-404c-9186-d67f073cbc11", 00:11:10.654 "strip_size_kb": 0, 00:11:10.654 "state": "online", 00:11:10.654 "raid_level": "raid1", 00:11:10.654 "superblock": false, 00:11:10.654 "num_base_bdevs": 4, 00:11:10.654 "num_base_bdevs_discovered": 4, 00:11:10.654 "num_base_bdevs_operational": 4, 00:11:10.654 "base_bdevs_list": [ 00:11:10.654 { 00:11:10.654 "name": "NewBaseBdev", 00:11:10.654 "uuid": "c17b9e10-c3a6-4124-9205-7e45f3aa65bc", 00:11:10.654 "is_configured": true, 00:11:10.654 "data_offset": 0, 00:11:10.654 "data_size": 65536 00:11:10.654 }, 00:11:10.654 { 00:11:10.654 "name": "BaseBdev2", 00:11:10.654 "uuid": "1efb8696-42fe-48e7-b1db-4ceecea1354d", 00:11:10.654 "is_configured": true, 00:11:10.654 "data_offset": 0, 00:11:10.654 "data_size": 65536 00:11:10.654 }, 00:11:10.654 { 00:11:10.654 "name": "BaseBdev3", 00:11:10.654 "uuid": "b7532961-e982-4959-b357-9595d62e1300", 00:11:10.654 
"is_configured": true, 00:11:10.654 "data_offset": 0, 00:11:10.654 "data_size": 65536 00:11:10.654 }, 00:11:10.654 { 00:11:10.654 "name": "BaseBdev4", 00:11:10.654 "uuid": "fed549fd-f068-4f68-9f25-035e9e85efe8", 00:11:10.654 "is_configured": true, 00:11:10.654 "data_offset": 0, 00:11:10.654 "data_size": 65536 00:11:10.654 } 00:11:10.654 ] 00:11:10.654 } 00:11:10.654 } 00:11:10.654 }' 00:11:10.654 04:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:10.654 04:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:10.654 BaseBdev2 00:11:10.654 BaseBdev3 00:11:10.654 BaseBdev4' 00:11:10.654 04:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:10.654 04:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:10.654 04:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:10.654 04:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:10.654 04:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:10.654 04:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.654 04:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.654 04:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.654 04:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:10.654 04:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:10.654 04:09:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:10.654 04:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:10.654 04:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:10.654 04:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.654 04:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.654 04:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.654 04:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:10.654 04:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:10.654 04:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:10.654 04:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:10.654 04:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.654 04:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.654 04:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:10.654 04:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.654 04:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:10.654 04:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:10.654 04:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:10.654 04:09:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:10.654 04:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.654 04:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.654 04:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:10.914 04:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.914 04:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:10.914 04:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:10.914 04:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:10.914 04:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.914 04:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.914 [2024-11-21 04:09:10.673042] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:10.914 [2024-11-21 04:09:10.673074] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:10.914 [2024-11-21 04:09:10.673186] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:10.914 [2024-11-21 04:09:10.673501] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:10.914 [2024-11-21 04:09:10.673524] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:11:10.914 04:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.914 04:09:10 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 84006 00:11:10.914 04:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 84006 ']' 00:11:10.914 04:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 84006 00:11:10.914 04:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:10.914 04:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:10.914 04:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84006 00:11:10.914 04:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:10.914 04:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:10.914 04:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84006' 00:11:10.914 killing process with pid 84006 00:11:10.914 04:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 84006 00:11:10.914 [2024-11-21 04:09:10.722437] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:10.914 04:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 84006 00:11:10.914 [2024-11-21 04:09:10.800532] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:11.174 04:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:11.174 00:11:11.174 real 0m9.667s 00:11:11.174 user 0m16.110s 00:11:11.174 sys 0m2.178s 00:11:11.174 ************************************ 00:11:11.174 END TEST raid_state_function_test 00:11:11.174 ************************************ 00:11:11.174 04:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:11.174 04:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:11:11.435 04:09:11 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:11:11.435 04:09:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:11.435 04:09:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:11.435 04:09:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:11.435 ************************************ 00:11:11.435 START TEST raid_state_function_test_sb 00:11:11.435 ************************************ 00:11:11.435 04:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true 00:11:11.435 04:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:11.435 04:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:11.435 04:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:11.435 04:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:11.435 04:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:11.435 04:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:11.435 04:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:11.435 04:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:11.435 04:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:11.435 04:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:11.435 04:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:11.435 04:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:11.435 
04:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:11.435 04:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:11.435 04:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:11.435 04:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:11.435 04:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:11.435 04:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:11.435 04:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:11.435 04:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:11.435 04:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:11.435 04:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:11.435 04:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:11.435 04:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:11.435 04:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:11.435 04:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:11.435 04:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:11.435 04:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:11.435 04:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=84661 00:11:11.435 04:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:11.435 04:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 84661' 00:11:11.435 Process raid pid: 84661 00:11:11.435 04:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 84661 00:11:11.435 04:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 84661 ']' 00:11:11.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:11.435 04:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:11.435 04:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:11.435 04:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:11.435 04:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:11.435 04:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.435 [2024-11-21 04:09:11.298841] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:11:11.435 [2024-11-21 04:09:11.299036] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:11.695 [2024-11-21 04:09:11.455891] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:11.695 [2024-11-21 04:09:11.498857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.695 [2024-11-21 04:09:11.575168] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:11.695 [2024-11-21 04:09:11.575211] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:12.264 04:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:12.264 04:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:12.264 04:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:12.264 04:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.264 04:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.264 [2024-11-21 04:09:12.150243] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:12.264 [2024-11-21 04:09:12.150401] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:12.264 [2024-11-21 04:09:12.150431] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:12.264 [2024-11-21 04:09:12.150454] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:12.264 [2024-11-21 04:09:12.150471] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:11:12.264 [2024-11-21 04:09:12.150494] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:12.264 [2024-11-21 04:09:12.150510] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:12.264 [2024-11-21 04:09:12.150558] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:12.264 04:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.264 04:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:12.264 04:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:12.264 04:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:12.264 04:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:12.264 04:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:12.264 04:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:12.265 04:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.265 04:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.265 04:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.265 04:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.265 04:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.265 04:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.265 04:09:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:12.265 04:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.265 04:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.265 04:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.265 "name": "Existed_Raid", 00:11:12.265 "uuid": "9cb2fb97-d680-4082-95c7-24cf33b1fe9b", 00:11:12.265 "strip_size_kb": 0, 00:11:12.265 "state": "configuring", 00:11:12.265 "raid_level": "raid1", 00:11:12.265 "superblock": true, 00:11:12.265 "num_base_bdevs": 4, 00:11:12.265 "num_base_bdevs_discovered": 0, 00:11:12.265 "num_base_bdevs_operational": 4, 00:11:12.265 "base_bdevs_list": [ 00:11:12.265 { 00:11:12.265 "name": "BaseBdev1", 00:11:12.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.265 "is_configured": false, 00:11:12.265 "data_offset": 0, 00:11:12.265 "data_size": 0 00:11:12.265 }, 00:11:12.265 { 00:11:12.265 "name": "BaseBdev2", 00:11:12.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.265 "is_configured": false, 00:11:12.265 "data_offset": 0, 00:11:12.265 "data_size": 0 00:11:12.265 }, 00:11:12.265 { 00:11:12.265 "name": "BaseBdev3", 00:11:12.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.265 "is_configured": false, 00:11:12.265 "data_offset": 0, 00:11:12.265 "data_size": 0 00:11:12.265 }, 00:11:12.265 { 00:11:12.265 "name": "BaseBdev4", 00:11:12.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.265 "is_configured": false, 00:11:12.265 "data_offset": 0, 00:11:12.265 "data_size": 0 00:11:12.265 } 00:11:12.265 ] 00:11:12.265 }' 00:11:12.265 04:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.265 04:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.835 04:09:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:12.835 04:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.835 04:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.835 [2024-11-21 04:09:12.593384] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:12.835 [2024-11-21 04:09:12.593528] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:11:12.835 04:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.835 04:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:12.835 04:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.835 04:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.835 [2024-11-21 04:09:12.605402] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:12.835 [2024-11-21 04:09:12.605456] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:12.835 [2024-11-21 04:09:12.605465] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:12.835 [2024-11-21 04:09:12.605476] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:12.835 [2024-11-21 04:09:12.605482] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:12.835 [2024-11-21 04:09:12.605492] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:12.835 [2024-11-21 04:09:12.605498] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev4 00:11:12.835 [2024-11-21 04:09:12.605507] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:12.835 04:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.835 04:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:12.835 04:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.835 04:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.835 [2024-11-21 04:09:12.632627] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:12.835 BaseBdev1 00:11:12.835 04:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.835 04:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:12.835 04:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:12.835 04:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:12.835 04:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:12.835 04:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:12.835 04:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:12.835 04:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:12.835 04:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.835 04:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.835 04:09:12 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.835 04:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:12.836 04:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.836 04:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.836 [ 00:11:12.836 { 00:11:12.836 "name": "BaseBdev1", 00:11:12.836 "aliases": [ 00:11:12.836 "55d4be8f-55a9-4e7e-85cf-2bd45b3aaaf1" 00:11:12.836 ], 00:11:12.836 "product_name": "Malloc disk", 00:11:12.836 "block_size": 512, 00:11:12.836 "num_blocks": 65536, 00:11:12.836 "uuid": "55d4be8f-55a9-4e7e-85cf-2bd45b3aaaf1", 00:11:12.836 "assigned_rate_limits": { 00:11:12.836 "rw_ios_per_sec": 0, 00:11:12.836 "rw_mbytes_per_sec": 0, 00:11:12.836 "r_mbytes_per_sec": 0, 00:11:12.836 "w_mbytes_per_sec": 0 00:11:12.836 }, 00:11:12.836 "claimed": true, 00:11:12.836 "claim_type": "exclusive_write", 00:11:12.836 "zoned": false, 00:11:12.836 "supported_io_types": { 00:11:12.836 "read": true, 00:11:12.836 "write": true, 00:11:12.836 "unmap": true, 00:11:12.836 "flush": true, 00:11:12.836 "reset": true, 00:11:12.836 "nvme_admin": false, 00:11:12.836 "nvme_io": false, 00:11:12.836 "nvme_io_md": false, 00:11:12.836 "write_zeroes": true, 00:11:12.836 "zcopy": true, 00:11:12.836 "get_zone_info": false, 00:11:12.836 "zone_management": false, 00:11:12.836 "zone_append": false, 00:11:12.836 "compare": false, 00:11:12.836 "compare_and_write": false, 00:11:12.836 "abort": true, 00:11:12.836 "seek_hole": false, 00:11:12.836 "seek_data": false, 00:11:12.836 "copy": true, 00:11:12.836 "nvme_iov_md": false 00:11:12.836 }, 00:11:12.836 "memory_domains": [ 00:11:12.836 { 00:11:12.836 "dma_device_id": "system", 00:11:12.836 "dma_device_type": 1 00:11:12.836 }, 00:11:12.836 { 00:11:12.836 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.836 "dma_device_type": 2 00:11:12.836 } 00:11:12.836 
], 00:11:12.836 "driver_specific": {} 00:11:12.836 } 00:11:12.836 ] 00:11:12.836 04:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.836 04:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:12.836 04:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:12.836 04:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:12.836 04:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:12.836 04:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:12.836 04:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:12.836 04:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:12.836 04:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.836 04:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.836 04:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.836 04:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.836 04:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.836 04:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.836 04:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:12.836 04:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.836 04:09:12 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.836 04:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.836 "name": "Existed_Raid", 00:11:12.836 "uuid": "577a1ef0-1bc7-4f35-88b8-d49bdc5a5dde", 00:11:12.836 "strip_size_kb": 0, 00:11:12.836 "state": "configuring", 00:11:12.836 "raid_level": "raid1", 00:11:12.836 "superblock": true, 00:11:12.836 "num_base_bdevs": 4, 00:11:12.836 "num_base_bdevs_discovered": 1, 00:11:12.836 "num_base_bdevs_operational": 4, 00:11:12.836 "base_bdevs_list": [ 00:11:12.836 { 00:11:12.836 "name": "BaseBdev1", 00:11:12.836 "uuid": "55d4be8f-55a9-4e7e-85cf-2bd45b3aaaf1", 00:11:12.836 "is_configured": true, 00:11:12.836 "data_offset": 2048, 00:11:12.836 "data_size": 63488 00:11:12.836 }, 00:11:12.836 { 00:11:12.836 "name": "BaseBdev2", 00:11:12.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.836 "is_configured": false, 00:11:12.836 "data_offset": 0, 00:11:12.836 "data_size": 0 00:11:12.836 }, 00:11:12.836 { 00:11:12.836 "name": "BaseBdev3", 00:11:12.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.836 "is_configured": false, 00:11:12.836 "data_offset": 0, 00:11:12.836 "data_size": 0 00:11:12.836 }, 00:11:12.836 { 00:11:12.836 "name": "BaseBdev4", 00:11:12.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.836 "is_configured": false, 00:11:12.836 "data_offset": 0, 00:11:12.836 "data_size": 0 00:11:12.836 } 00:11:12.836 ] 00:11:12.836 }' 00:11:12.836 04:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.836 04:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.407 04:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:13.407 04:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.407 04:09:13 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.407 [2024-11-21 04:09:13.075987] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:13.407 [2024-11-21 04:09:13.076089] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:11:13.407 04:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.407 04:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:13.407 04:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.407 04:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.407 [2024-11-21 04:09:13.087983] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:13.407 [2024-11-21 04:09:13.090339] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:13.407 [2024-11-21 04:09:13.090383] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:13.407 [2024-11-21 04:09:13.090393] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:13.407 [2024-11-21 04:09:13.090402] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:13.407 [2024-11-21 04:09:13.090408] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:13.407 [2024-11-21 04:09:13.090416] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:13.407 04:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.407 04:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 
1 )) 00:11:13.407 04:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:13.407 04:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:13.407 04:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:13.407 04:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:13.407 04:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:13.407 04:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:13.407 04:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:13.407 04:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.407 04:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.407 04:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:13.407 04:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.407 04:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.407 04:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:13.407 04:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.407 04:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.407 04:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.407 04:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:11:13.407 "name": "Existed_Raid", 00:11:13.407 "uuid": "0fc38a4b-f6b3-41cc-aba9-aa22443ef48e", 00:11:13.407 "strip_size_kb": 0, 00:11:13.407 "state": "configuring", 00:11:13.407 "raid_level": "raid1", 00:11:13.407 "superblock": true, 00:11:13.407 "num_base_bdevs": 4, 00:11:13.407 "num_base_bdevs_discovered": 1, 00:11:13.407 "num_base_bdevs_operational": 4, 00:11:13.407 "base_bdevs_list": [ 00:11:13.407 { 00:11:13.407 "name": "BaseBdev1", 00:11:13.407 "uuid": "55d4be8f-55a9-4e7e-85cf-2bd45b3aaaf1", 00:11:13.407 "is_configured": true, 00:11:13.407 "data_offset": 2048, 00:11:13.407 "data_size": 63488 00:11:13.407 }, 00:11:13.407 { 00:11:13.407 "name": "BaseBdev2", 00:11:13.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.407 "is_configured": false, 00:11:13.407 "data_offset": 0, 00:11:13.407 "data_size": 0 00:11:13.407 }, 00:11:13.407 { 00:11:13.407 "name": "BaseBdev3", 00:11:13.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.407 "is_configured": false, 00:11:13.407 "data_offset": 0, 00:11:13.407 "data_size": 0 00:11:13.407 }, 00:11:13.407 { 00:11:13.407 "name": "BaseBdev4", 00:11:13.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.407 "is_configured": false, 00:11:13.407 "data_offset": 0, 00:11:13.407 "data_size": 0 00:11:13.407 } 00:11:13.407 ] 00:11:13.407 }' 00:11:13.407 04:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:13.407 04:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.668 04:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:13.668 04:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.668 04:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.668 [2024-11-21 04:09:13.516511] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:11:13.668 BaseBdev2 00:11:13.668 04:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.668 04:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:13.668 04:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:13.668 04:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:13.668 04:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:13.668 04:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:13.668 04:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:13.668 04:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:13.668 04:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.668 04:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.668 04:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.668 04:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:13.668 04:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.668 04:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.668 [ 00:11:13.668 { 00:11:13.668 "name": "BaseBdev2", 00:11:13.668 "aliases": [ 00:11:13.668 "99497f6a-742a-48d8-95dc-828609f5895c" 00:11:13.668 ], 00:11:13.668 "product_name": "Malloc disk", 00:11:13.668 "block_size": 512, 00:11:13.668 "num_blocks": 65536, 00:11:13.668 "uuid": "99497f6a-742a-48d8-95dc-828609f5895c", 00:11:13.668 
"assigned_rate_limits": { 00:11:13.668 "rw_ios_per_sec": 0, 00:11:13.668 "rw_mbytes_per_sec": 0, 00:11:13.668 "r_mbytes_per_sec": 0, 00:11:13.668 "w_mbytes_per_sec": 0 00:11:13.668 }, 00:11:13.668 "claimed": true, 00:11:13.668 "claim_type": "exclusive_write", 00:11:13.668 "zoned": false, 00:11:13.668 "supported_io_types": { 00:11:13.668 "read": true, 00:11:13.668 "write": true, 00:11:13.668 "unmap": true, 00:11:13.668 "flush": true, 00:11:13.668 "reset": true, 00:11:13.668 "nvme_admin": false, 00:11:13.668 "nvme_io": false, 00:11:13.668 "nvme_io_md": false, 00:11:13.668 "write_zeroes": true, 00:11:13.668 "zcopy": true, 00:11:13.668 "get_zone_info": false, 00:11:13.668 "zone_management": false, 00:11:13.668 "zone_append": false, 00:11:13.668 "compare": false, 00:11:13.668 "compare_and_write": false, 00:11:13.668 "abort": true, 00:11:13.668 "seek_hole": false, 00:11:13.668 "seek_data": false, 00:11:13.668 "copy": true, 00:11:13.668 "nvme_iov_md": false 00:11:13.668 }, 00:11:13.668 "memory_domains": [ 00:11:13.668 { 00:11:13.668 "dma_device_id": "system", 00:11:13.668 "dma_device_type": 1 00:11:13.668 }, 00:11:13.668 { 00:11:13.668 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.668 "dma_device_type": 2 00:11:13.668 } 00:11:13.668 ], 00:11:13.668 "driver_specific": {} 00:11:13.668 } 00:11:13.668 ] 00:11:13.668 04:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.668 04:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:13.668 04:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:13.668 04:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:13.668 04:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:13.668 04:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:11:13.668 04:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:13.668 04:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:13.668 04:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:13.668 04:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:13.668 04:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.668 04:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.668 04:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:13.668 04:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.668 04:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.668 04:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.668 04:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:13.668 04:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.668 04:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.668 04:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:13.668 "name": "Existed_Raid", 00:11:13.668 "uuid": "0fc38a4b-f6b3-41cc-aba9-aa22443ef48e", 00:11:13.668 "strip_size_kb": 0, 00:11:13.668 "state": "configuring", 00:11:13.668 "raid_level": "raid1", 00:11:13.668 "superblock": true, 00:11:13.668 "num_base_bdevs": 4, 00:11:13.668 "num_base_bdevs_discovered": 2, 00:11:13.668 "num_base_bdevs_operational": 4, 
00:11:13.668 "base_bdevs_list": [ 00:11:13.668 { 00:11:13.668 "name": "BaseBdev1", 00:11:13.668 "uuid": "55d4be8f-55a9-4e7e-85cf-2bd45b3aaaf1", 00:11:13.668 "is_configured": true, 00:11:13.668 "data_offset": 2048, 00:11:13.668 "data_size": 63488 00:11:13.668 }, 00:11:13.668 { 00:11:13.668 "name": "BaseBdev2", 00:11:13.668 "uuid": "99497f6a-742a-48d8-95dc-828609f5895c", 00:11:13.668 "is_configured": true, 00:11:13.668 "data_offset": 2048, 00:11:13.668 "data_size": 63488 00:11:13.668 }, 00:11:13.668 { 00:11:13.668 "name": "BaseBdev3", 00:11:13.668 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.668 "is_configured": false, 00:11:13.668 "data_offset": 0, 00:11:13.668 "data_size": 0 00:11:13.668 }, 00:11:13.668 { 00:11:13.668 "name": "BaseBdev4", 00:11:13.668 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.668 "is_configured": false, 00:11:13.668 "data_offset": 0, 00:11:13.669 "data_size": 0 00:11:13.669 } 00:11:13.669 ] 00:11:13.669 }' 00:11:13.669 04:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:13.669 04:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.239 04:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:14.239 04:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.239 04:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.239 [2024-11-21 04:09:14.037490] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:14.239 BaseBdev3 00:11:14.239 04:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.239 04:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:14.239 04:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev3 00:11:14.239 04:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:14.239 04:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:14.239 04:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:14.239 04:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:14.239 04:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:14.239 04:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.239 04:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.239 04:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.239 04:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:14.239 04:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.239 04:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.239 [ 00:11:14.239 { 00:11:14.239 "name": "BaseBdev3", 00:11:14.239 "aliases": [ 00:11:14.239 "2696d6a9-a45e-47ed-8cb4-b20af57f8b84" 00:11:14.239 ], 00:11:14.239 "product_name": "Malloc disk", 00:11:14.239 "block_size": 512, 00:11:14.239 "num_blocks": 65536, 00:11:14.239 "uuid": "2696d6a9-a45e-47ed-8cb4-b20af57f8b84", 00:11:14.239 "assigned_rate_limits": { 00:11:14.239 "rw_ios_per_sec": 0, 00:11:14.239 "rw_mbytes_per_sec": 0, 00:11:14.239 "r_mbytes_per_sec": 0, 00:11:14.239 "w_mbytes_per_sec": 0 00:11:14.239 }, 00:11:14.239 "claimed": true, 00:11:14.239 "claim_type": "exclusive_write", 00:11:14.239 "zoned": false, 00:11:14.239 "supported_io_types": { 00:11:14.239 "read": true, 00:11:14.239 
"write": true, 00:11:14.239 "unmap": true, 00:11:14.239 "flush": true, 00:11:14.239 "reset": true, 00:11:14.239 "nvme_admin": false, 00:11:14.239 "nvme_io": false, 00:11:14.239 "nvme_io_md": false, 00:11:14.239 "write_zeroes": true, 00:11:14.239 "zcopy": true, 00:11:14.239 "get_zone_info": false, 00:11:14.239 "zone_management": false, 00:11:14.239 "zone_append": false, 00:11:14.239 "compare": false, 00:11:14.239 "compare_and_write": false, 00:11:14.239 "abort": true, 00:11:14.239 "seek_hole": false, 00:11:14.239 "seek_data": false, 00:11:14.239 "copy": true, 00:11:14.239 "nvme_iov_md": false 00:11:14.239 }, 00:11:14.239 "memory_domains": [ 00:11:14.239 { 00:11:14.239 "dma_device_id": "system", 00:11:14.239 "dma_device_type": 1 00:11:14.239 }, 00:11:14.239 { 00:11:14.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.239 "dma_device_type": 2 00:11:14.239 } 00:11:14.239 ], 00:11:14.239 "driver_specific": {} 00:11:14.239 } 00:11:14.239 ] 00:11:14.239 04:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.239 04:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:14.239 04:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:14.239 04:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:14.239 04:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:14.239 04:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:14.239 04:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:14.239 04:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:14.239 04:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:11:14.239 04:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:14.239 04:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.239 04:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.239 04:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.239 04:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.239 04:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.239 04:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:14.239 04:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.239 04:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.239 04:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.239 04:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.239 "name": "Existed_Raid", 00:11:14.239 "uuid": "0fc38a4b-f6b3-41cc-aba9-aa22443ef48e", 00:11:14.239 "strip_size_kb": 0, 00:11:14.239 "state": "configuring", 00:11:14.239 "raid_level": "raid1", 00:11:14.239 "superblock": true, 00:11:14.239 "num_base_bdevs": 4, 00:11:14.239 "num_base_bdevs_discovered": 3, 00:11:14.239 "num_base_bdevs_operational": 4, 00:11:14.239 "base_bdevs_list": [ 00:11:14.239 { 00:11:14.239 "name": "BaseBdev1", 00:11:14.239 "uuid": "55d4be8f-55a9-4e7e-85cf-2bd45b3aaaf1", 00:11:14.239 "is_configured": true, 00:11:14.239 "data_offset": 2048, 00:11:14.239 "data_size": 63488 00:11:14.239 }, 00:11:14.239 { 00:11:14.239 "name": "BaseBdev2", 00:11:14.239 "uuid": 
"99497f6a-742a-48d8-95dc-828609f5895c", 00:11:14.239 "is_configured": true, 00:11:14.239 "data_offset": 2048, 00:11:14.239 "data_size": 63488 00:11:14.239 }, 00:11:14.239 { 00:11:14.239 "name": "BaseBdev3", 00:11:14.239 "uuid": "2696d6a9-a45e-47ed-8cb4-b20af57f8b84", 00:11:14.239 "is_configured": true, 00:11:14.239 "data_offset": 2048, 00:11:14.239 "data_size": 63488 00:11:14.239 }, 00:11:14.239 { 00:11:14.239 "name": "BaseBdev4", 00:11:14.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.239 "is_configured": false, 00:11:14.239 "data_offset": 0, 00:11:14.239 "data_size": 0 00:11:14.239 } 00:11:14.239 ] 00:11:14.239 }' 00:11:14.239 04:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.239 04:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.810 04:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:14.810 04:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.810 04:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.810 [2024-11-21 04:09:14.505881] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:14.810 [2024-11-21 04:09:14.506274] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:11:14.810 [2024-11-21 04:09:14.506327] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:14.810 [2024-11-21 04:09:14.506711] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:11:14.810 BaseBdev4 00:11:14.810 [2024-11-21 04:09:14.506936] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:11:14.810 [2024-11-21 04:09:14.506957] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000001900 00:11:14.810 [2024-11-21 04:09:14.507116] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:14.810 04:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.810 04:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:14.810 04:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:14.810 04:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:14.810 04:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:14.810 04:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:14.810 04:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:14.810 04:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:14.810 04:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.810 04:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.810 04:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.810 04:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:14.810 04:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.810 04:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.810 [ 00:11:14.810 { 00:11:14.810 "name": "BaseBdev4", 00:11:14.810 "aliases": [ 00:11:14.810 "6a84d212-f80d-485c-9dc1-a13dc061c049" 00:11:14.810 ], 00:11:14.810 "product_name": "Malloc disk", 00:11:14.810 "block_size": 512, 00:11:14.810 
"num_blocks": 65536, 00:11:14.810 "uuid": "6a84d212-f80d-485c-9dc1-a13dc061c049", 00:11:14.810 "assigned_rate_limits": { 00:11:14.810 "rw_ios_per_sec": 0, 00:11:14.810 "rw_mbytes_per_sec": 0, 00:11:14.810 "r_mbytes_per_sec": 0, 00:11:14.810 "w_mbytes_per_sec": 0 00:11:14.810 }, 00:11:14.810 "claimed": true, 00:11:14.810 "claim_type": "exclusive_write", 00:11:14.810 "zoned": false, 00:11:14.810 "supported_io_types": { 00:11:14.810 "read": true, 00:11:14.810 "write": true, 00:11:14.810 "unmap": true, 00:11:14.810 "flush": true, 00:11:14.810 "reset": true, 00:11:14.810 "nvme_admin": false, 00:11:14.810 "nvme_io": false, 00:11:14.810 "nvme_io_md": false, 00:11:14.810 "write_zeroes": true, 00:11:14.810 "zcopy": true, 00:11:14.810 "get_zone_info": false, 00:11:14.810 "zone_management": false, 00:11:14.810 "zone_append": false, 00:11:14.810 "compare": false, 00:11:14.810 "compare_and_write": false, 00:11:14.810 "abort": true, 00:11:14.810 "seek_hole": false, 00:11:14.810 "seek_data": false, 00:11:14.810 "copy": true, 00:11:14.810 "nvme_iov_md": false 00:11:14.810 }, 00:11:14.810 "memory_domains": [ 00:11:14.810 { 00:11:14.810 "dma_device_id": "system", 00:11:14.810 "dma_device_type": 1 00:11:14.810 }, 00:11:14.810 { 00:11:14.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.810 "dma_device_type": 2 00:11:14.810 } 00:11:14.810 ], 00:11:14.810 "driver_specific": {} 00:11:14.810 } 00:11:14.810 ] 00:11:14.810 04:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.810 04:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:14.810 04:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:14.810 04:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:14.810 04:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:11:14.810 04:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:14.810 04:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:14.810 04:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:14.810 04:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:14.810 04:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:14.810 04:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.810 04:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.811 04:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.811 04:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.811 04:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.811 04:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:14.811 04:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.811 04:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.811 04:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.811 04:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.811 "name": "Existed_Raid", 00:11:14.811 "uuid": "0fc38a4b-f6b3-41cc-aba9-aa22443ef48e", 00:11:14.811 "strip_size_kb": 0, 00:11:14.811 "state": "online", 00:11:14.811 "raid_level": "raid1", 00:11:14.811 "superblock": true, 00:11:14.811 "num_base_bdevs": 4, 
00:11:14.811 "num_base_bdevs_discovered": 4, 00:11:14.811 "num_base_bdevs_operational": 4, 00:11:14.811 "base_bdevs_list": [ 00:11:14.811 { 00:11:14.811 "name": "BaseBdev1", 00:11:14.811 "uuid": "55d4be8f-55a9-4e7e-85cf-2bd45b3aaaf1", 00:11:14.811 "is_configured": true, 00:11:14.811 "data_offset": 2048, 00:11:14.811 "data_size": 63488 00:11:14.811 }, 00:11:14.811 { 00:11:14.811 "name": "BaseBdev2", 00:11:14.811 "uuid": "99497f6a-742a-48d8-95dc-828609f5895c", 00:11:14.811 "is_configured": true, 00:11:14.811 "data_offset": 2048, 00:11:14.811 "data_size": 63488 00:11:14.811 }, 00:11:14.811 { 00:11:14.811 "name": "BaseBdev3", 00:11:14.811 "uuid": "2696d6a9-a45e-47ed-8cb4-b20af57f8b84", 00:11:14.811 "is_configured": true, 00:11:14.811 "data_offset": 2048, 00:11:14.811 "data_size": 63488 00:11:14.811 }, 00:11:14.811 { 00:11:14.811 "name": "BaseBdev4", 00:11:14.811 "uuid": "6a84d212-f80d-485c-9dc1-a13dc061c049", 00:11:14.811 "is_configured": true, 00:11:14.811 "data_offset": 2048, 00:11:14.811 "data_size": 63488 00:11:14.811 } 00:11:14.811 ] 00:11:14.811 }' 00:11:14.811 04:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.811 04:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.073 04:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:15.073 04:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:15.073 04:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:15.073 04:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:15.073 04:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:15.073 04:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:15.073 
04:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:15.073 04:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.073 04:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.073 04:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:15.073 [2024-11-21 04:09:14.985644] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:15.073 04:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.073 04:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:15.073 "name": "Existed_Raid", 00:11:15.073 "aliases": [ 00:11:15.073 "0fc38a4b-f6b3-41cc-aba9-aa22443ef48e" 00:11:15.073 ], 00:11:15.073 "product_name": "Raid Volume", 00:11:15.073 "block_size": 512, 00:11:15.073 "num_blocks": 63488, 00:11:15.073 "uuid": "0fc38a4b-f6b3-41cc-aba9-aa22443ef48e", 00:11:15.073 "assigned_rate_limits": { 00:11:15.073 "rw_ios_per_sec": 0, 00:11:15.073 "rw_mbytes_per_sec": 0, 00:11:15.073 "r_mbytes_per_sec": 0, 00:11:15.073 "w_mbytes_per_sec": 0 00:11:15.073 }, 00:11:15.073 "claimed": false, 00:11:15.073 "zoned": false, 00:11:15.073 "supported_io_types": { 00:11:15.073 "read": true, 00:11:15.073 "write": true, 00:11:15.073 "unmap": false, 00:11:15.073 "flush": false, 00:11:15.073 "reset": true, 00:11:15.073 "nvme_admin": false, 00:11:15.073 "nvme_io": false, 00:11:15.073 "nvme_io_md": false, 00:11:15.073 "write_zeroes": true, 00:11:15.073 "zcopy": false, 00:11:15.073 "get_zone_info": false, 00:11:15.073 "zone_management": false, 00:11:15.073 "zone_append": false, 00:11:15.073 "compare": false, 00:11:15.073 "compare_and_write": false, 00:11:15.073 "abort": false, 00:11:15.073 "seek_hole": false, 00:11:15.073 "seek_data": false, 00:11:15.073 "copy": false, 00:11:15.073 
"nvme_iov_md": false 00:11:15.073 }, 00:11:15.073 "memory_domains": [ 00:11:15.073 { 00:11:15.073 "dma_device_id": "system", 00:11:15.073 "dma_device_type": 1 00:11:15.073 }, 00:11:15.073 { 00:11:15.073 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.073 "dma_device_type": 2 00:11:15.073 }, 00:11:15.073 { 00:11:15.073 "dma_device_id": "system", 00:11:15.073 "dma_device_type": 1 00:11:15.073 }, 00:11:15.073 { 00:11:15.073 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.073 "dma_device_type": 2 00:11:15.073 }, 00:11:15.073 { 00:11:15.073 "dma_device_id": "system", 00:11:15.073 "dma_device_type": 1 00:11:15.073 }, 00:11:15.073 { 00:11:15.073 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.073 "dma_device_type": 2 00:11:15.073 }, 00:11:15.073 { 00:11:15.073 "dma_device_id": "system", 00:11:15.073 "dma_device_type": 1 00:11:15.073 }, 00:11:15.073 { 00:11:15.073 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.073 "dma_device_type": 2 00:11:15.073 } 00:11:15.073 ], 00:11:15.073 "driver_specific": { 00:11:15.073 "raid": { 00:11:15.073 "uuid": "0fc38a4b-f6b3-41cc-aba9-aa22443ef48e", 00:11:15.073 "strip_size_kb": 0, 00:11:15.073 "state": "online", 00:11:15.074 "raid_level": "raid1", 00:11:15.074 "superblock": true, 00:11:15.074 "num_base_bdevs": 4, 00:11:15.074 "num_base_bdevs_discovered": 4, 00:11:15.074 "num_base_bdevs_operational": 4, 00:11:15.074 "base_bdevs_list": [ 00:11:15.074 { 00:11:15.074 "name": "BaseBdev1", 00:11:15.074 "uuid": "55d4be8f-55a9-4e7e-85cf-2bd45b3aaaf1", 00:11:15.074 "is_configured": true, 00:11:15.074 "data_offset": 2048, 00:11:15.074 "data_size": 63488 00:11:15.074 }, 00:11:15.074 { 00:11:15.074 "name": "BaseBdev2", 00:11:15.074 "uuid": "99497f6a-742a-48d8-95dc-828609f5895c", 00:11:15.074 "is_configured": true, 00:11:15.074 "data_offset": 2048, 00:11:15.074 "data_size": 63488 00:11:15.074 }, 00:11:15.074 { 00:11:15.074 "name": "BaseBdev3", 00:11:15.074 "uuid": "2696d6a9-a45e-47ed-8cb4-b20af57f8b84", 00:11:15.074 "is_configured": true, 
00:11:15.074 "data_offset": 2048, 00:11:15.074 "data_size": 63488 00:11:15.074 }, 00:11:15.074 { 00:11:15.074 "name": "BaseBdev4", 00:11:15.074 "uuid": "6a84d212-f80d-485c-9dc1-a13dc061c049", 00:11:15.074 "is_configured": true, 00:11:15.074 "data_offset": 2048, 00:11:15.074 "data_size": 63488 00:11:15.074 } 00:11:15.074 ] 00:11:15.074 } 00:11:15.074 } 00:11:15.074 }' 00:11:15.074 04:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:15.338 04:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:15.338 BaseBdev2 00:11:15.338 BaseBdev3 00:11:15.338 BaseBdev4' 00:11:15.338 04:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:15.338 04:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:15.338 04:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:15.338 04:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:15.338 04:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:15.338 04:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.338 04:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.338 04:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.338 04:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:15.338 04:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:15.338 04:09:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:15.338 04:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:15.338 04:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:15.338 04:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.338 04:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.338 04:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.338 04:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:15.338 04:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:15.338 04:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:15.338 04:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:15.338 04:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.338 04:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.338 04:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:15.338 04:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.338 04:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:15.338 04:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:15.338 04:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:11:15.338 04:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:15.338 04:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.338 04:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:15.338 04:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.338 04:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.598 04:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:15.598 04:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:15.598 04:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:15.598 04:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.598 04:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.598 [2024-11-21 04:09:15.320781] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:15.598 04:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.598 04:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:15.598 04:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:15.598 04:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:15.598 04:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:11:15.598 04:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:15.598 04:09:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:15.598 04:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:15.598 04:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:15.598 04:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:15.598 04:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:15.598 04:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:15.598 04:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.598 04:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.598 04:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.598 04:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.598 04:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:15.598 04:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.598 04:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.598 04:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.598 04:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.598 04:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.598 "name": "Existed_Raid", 00:11:15.598 "uuid": "0fc38a4b-f6b3-41cc-aba9-aa22443ef48e", 00:11:15.598 "strip_size_kb": 0, 00:11:15.598 
"state": "online", 00:11:15.598 "raid_level": "raid1", 00:11:15.598 "superblock": true, 00:11:15.598 "num_base_bdevs": 4, 00:11:15.598 "num_base_bdevs_discovered": 3, 00:11:15.598 "num_base_bdevs_operational": 3, 00:11:15.598 "base_bdevs_list": [ 00:11:15.598 { 00:11:15.598 "name": null, 00:11:15.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.599 "is_configured": false, 00:11:15.599 "data_offset": 0, 00:11:15.599 "data_size": 63488 00:11:15.599 }, 00:11:15.599 { 00:11:15.599 "name": "BaseBdev2", 00:11:15.599 "uuid": "99497f6a-742a-48d8-95dc-828609f5895c", 00:11:15.599 "is_configured": true, 00:11:15.599 "data_offset": 2048, 00:11:15.599 "data_size": 63488 00:11:15.599 }, 00:11:15.599 { 00:11:15.599 "name": "BaseBdev3", 00:11:15.599 "uuid": "2696d6a9-a45e-47ed-8cb4-b20af57f8b84", 00:11:15.599 "is_configured": true, 00:11:15.599 "data_offset": 2048, 00:11:15.599 "data_size": 63488 00:11:15.599 }, 00:11:15.599 { 00:11:15.599 "name": "BaseBdev4", 00:11:15.599 "uuid": "6a84d212-f80d-485c-9dc1-a13dc061c049", 00:11:15.599 "is_configured": true, 00:11:15.599 "data_offset": 2048, 00:11:15.599 "data_size": 63488 00:11:15.599 } 00:11:15.599 ] 00:11:15.599 }' 00:11:15.599 04:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.599 04:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.858 04:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:15.858 04:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:15.858 04:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.858 04:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.858 04:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.858 04:09:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:15.858 04:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.858 04:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:15.858 04:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:15.858 04:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:15.858 04:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.858 04:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.118 [2024-11-21 04:09:15.833295] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:16.118 04:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.118 04:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:16.118 04:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:16.118 04:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:16.118 04:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.118 04:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.118 04:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.118 04:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.118 04:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:16.118 04:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid 
'!=' Existed_Raid ']' 00:11:16.118 04:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:16.118 04:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.118 04:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.118 [2024-11-21 04:09:15.913995] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:16.118 04:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.118 04:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:16.118 04:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:16.118 04:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.118 04:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:16.118 04:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.118 04:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.118 04:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.118 04:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:16.118 04:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:16.118 04:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:16.118 04:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.118 04:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.118 [2024-11-21 04:09:15.994945] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:16.118 [2024-11-21 04:09:15.995130] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:16.118 [2024-11-21 04:09:16.016548] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:16.118 [2024-11-21 04:09:16.016677] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:16.118 [2024-11-21 04:09:16.016723] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:11:16.118 04:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.118 04:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:16.118 04:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:16.118 04:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.118 04:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.118 04:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:16.118 04:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.118 04:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.118 04:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:16.118 04:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:16.118 04:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:16.118 04:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:16.118 04:09:16 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:16.118 04:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:16.118 04:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.118 04:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.378 BaseBdev2 00:11:16.378 04:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.378 04:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:16.378 04:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:16.378 04:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:16.378 04:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:16.378 04:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:16.378 04:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:16.378 04:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:16.378 04:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.378 04:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.378 04:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.378 04:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:16.378 04:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.378 04:09:16 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:11:16.378 [ 00:11:16.378 { 00:11:16.378 "name": "BaseBdev2", 00:11:16.378 "aliases": [ 00:11:16.378 "7689aa42-83f4-4b66-98f6-1feed79b4081" 00:11:16.378 ], 00:11:16.378 "product_name": "Malloc disk", 00:11:16.378 "block_size": 512, 00:11:16.378 "num_blocks": 65536, 00:11:16.378 "uuid": "7689aa42-83f4-4b66-98f6-1feed79b4081", 00:11:16.378 "assigned_rate_limits": { 00:11:16.378 "rw_ios_per_sec": 0, 00:11:16.378 "rw_mbytes_per_sec": 0, 00:11:16.378 "r_mbytes_per_sec": 0, 00:11:16.378 "w_mbytes_per_sec": 0 00:11:16.378 }, 00:11:16.378 "claimed": false, 00:11:16.378 "zoned": false, 00:11:16.378 "supported_io_types": { 00:11:16.378 "read": true, 00:11:16.378 "write": true, 00:11:16.378 "unmap": true, 00:11:16.378 "flush": true, 00:11:16.378 "reset": true, 00:11:16.378 "nvme_admin": false, 00:11:16.378 "nvme_io": false, 00:11:16.378 "nvme_io_md": false, 00:11:16.378 "write_zeroes": true, 00:11:16.378 "zcopy": true, 00:11:16.378 "get_zone_info": false, 00:11:16.378 "zone_management": false, 00:11:16.378 "zone_append": false, 00:11:16.378 "compare": false, 00:11:16.378 "compare_and_write": false, 00:11:16.378 "abort": true, 00:11:16.378 "seek_hole": false, 00:11:16.378 "seek_data": false, 00:11:16.378 "copy": true, 00:11:16.378 "nvme_iov_md": false 00:11:16.378 }, 00:11:16.378 "memory_domains": [ 00:11:16.378 { 00:11:16.378 "dma_device_id": "system", 00:11:16.378 "dma_device_type": 1 00:11:16.378 }, 00:11:16.378 { 00:11:16.378 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.378 "dma_device_type": 2 00:11:16.378 } 00:11:16.378 ], 00:11:16.378 "driver_specific": {} 00:11:16.378 } 00:11:16.378 ] 00:11:16.378 04:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.378 04:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:16.378 04:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:16.378 04:09:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:16.378 04:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:16.378 04:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.378 04:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.378 BaseBdev3 00:11:16.379 04:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.379 04:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:16.379 04:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:16.379 04:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:16.379 04:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:16.379 04:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:16.379 04:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:16.379 04:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:16.379 04:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.379 04:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.379 04:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.379 04:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:16.379 04:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.379 04:09:16 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.379 [ 00:11:16.379 { 00:11:16.379 "name": "BaseBdev3", 00:11:16.379 "aliases": [ 00:11:16.379 "52091b79-7036-43d6-b8e4-9210d8ea26e5" 00:11:16.379 ], 00:11:16.379 "product_name": "Malloc disk", 00:11:16.379 "block_size": 512, 00:11:16.379 "num_blocks": 65536, 00:11:16.379 "uuid": "52091b79-7036-43d6-b8e4-9210d8ea26e5", 00:11:16.379 "assigned_rate_limits": { 00:11:16.379 "rw_ios_per_sec": 0, 00:11:16.379 "rw_mbytes_per_sec": 0, 00:11:16.379 "r_mbytes_per_sec": 0, 00:11:16.379 "w_mbytes_per_sec": 0 00:11:16.379 }, 00:11:16.379 "claimed": false, 00:11:16.379 "zoned": false, 00:11:16.379 "supported_io_types": { 00:11:16.379 "read": true, 00:11:16.379 "write": true, 00:11:16.379 "unmap": true, 00:11:16.379 "flush": true, 00:11:16.379 "reset": true, 00:11:16.379 "nvme_admin": false, 00:11:16.379 "nvme_io": false, 00:11:16.379 "nvme_io_md": false, 00:11:16.379 "write_zeroes": true, 00:11:16.379 "zcopy": true, 00:11:16.379 "get_zone_info": false, 00:11:16.379 "zone_management": false, 00:11:16.379 "zone_append": false, 00:11:16.379 "compare": false, 00:11:16.379 "compare_and_write": false, 00:11:16.379 "abort": true, 00:11:16.379 "seek_hole": false, 00:11:16.379 "seek_data": false, 00:11:16.379 "copy": true, 00:11:16.379 "nvme_iov_md": false 00:11:16.379 }, 00:11:16.379 "memory_domains": [ 00:11:16.379 { 00:11:16.379 "dma_device_id": "system", 00:11:16.379 "dma_device_type": 1 00:11:16.379 }, 00:11:16.379 { 00:11:16.379 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.379 "dma_device_type": 2 00:11:16.379 } 00:11:16.379 ], 00:11:16.379 "driver_specific": {} 00:11:16.379 } 00:11:16.379 ] 00:11:16.379 04:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.379 04:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:16.379 04:09:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:16.379 04:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:16.379 04:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:16.379 04:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.379 04:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.379 BaseBdev4 00:11:16.379 04:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.379 04:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:16.379 04:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:16.379 04:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:16.379 04:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:16.379 04:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:16.379 04:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:16.379 04:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:16.379 04:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.379 04:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.379 04:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.379 04:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:16.379 04:09:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.379 04:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.379 [ 00:11:16.379 { 00:11:16.379 "name": "BaseBdev4", 00:11:16.379 "aliases": [ 00:11:16.379 "9ba85835-adcc-4486-80f4-ae54c01eaef4" 00:11:16.379 ], 00:11:16.379 "product_name": "Malloc disk", 00:11:16.379 "block_size": 512, 00:11:16.379 "num_blocks": 65536, 00:11:16.379 "uuid": "9ba85835-adcc-4486-80f4-ae54c01eaef4", 00:11:16.379 "assigned_rate_limits": { 00:11:16.379 "rw_ios_per_sec": 0, 00:11:16.379 "rw_mbytes_per_sec": 0, 00:11:16.379 "r_mbytes_per_sec": 0, 00:11:16.379 "w_mbytes_per_sec": 0 00:11:16.379 }, 00:11:16.379 "claimed": false, 00:11:16.379 "zoned": false, 00:11:16.379 "supported_io_types": { 00:11:16.379 "read": true, 00:11:16.379 "write": true, 00:11:16.379 "unmap": true, 00:11:16.379 "flush": true, 00:11:16.379 "reset": true, 00:11:16.379 "nvme_admin": false, 00:11:16.379 "nvme_io": false, 00:11:16.379 "nvme_io_md": false, 00:11:16.379 "write_zeroes": true, 00:11:16.379 "zcopy": true, 00:11:16.379 "get_zone_info": false, 00:11:16.379 "zone_management": false, 00:11:16.379 "zone_append": false, 00:11:16.379 "compare": false, 00:11:16.379 "compare_and_write": false, 00:11:16.379 "abort": true, 00:11:16.379 "seek_hole": false, 00:11:16.379 "seek_data": false, 00:11:16.379 "copy": true, 00:11:16.379 "nvme_iov_md": false 00:11:16.379 }, 00:11:16.379 "memory_domains": [ 00:11:16.379 { 00:11:16.379 "dma_device_id": "system", 00:11:16.379 "dma_device_type": 1 00:11:16.379 }, 00:11:16.379 { 00:11:16.379 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.379 "dma_device_type": 2 00:11:16.379 } 00:11:16.379 ], 00:11:16.379 "driver_specific": {} 00:11:16.379 } 00:11:16.379 ] 00:11:16.379 04:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.379 04:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:11:16.379 04:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:16.379 04:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:16.379 04:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:16.379 04:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.379 04:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.379 [2024-11-21 04:09:16.254889] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:16.379 [2024-11-21 04:09:16.254947] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:16.379 [2024-11-21 04:09:16.254974] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:16.379 [2024-11-21 04:09:16.257111] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:16.379 [2024-11-21 04:09:16.257157] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:16.379 04:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.379 04:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:16.379 04:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:16.379 04:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:16.379 04:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:16.379 04:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:16.379 04:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:16.379 04:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.379 04:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.379 04:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.379 04:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.379 04:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.379 04:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:16.379 04:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.379 04:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.379 04:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.379 04:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.379 "name": "Existed_Raid", 00:11:16.379 "uuid": "f3db47a5-008f-493f-8b83-7923e96e3fb6", 00:11:16.379 "strip_size_kb": 0, 00:11:16.379 "state": "configuring", 00:11:16.379 "raid_level": "raid1", 00:11:16.379 "superblock": true, 00:11:16.379 "num_base_bdevs": 4, 00:11:16.379 "num_base_bdevs_discovered": 3, 00:11:16.379 "num_base_bdevs_operational": 4, 00:11:16.379 "base_bdevs_list": [ 00:11:16.379 { 00:11:16.379 "name": "BaseBdev1", 00:11:16.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.379 "is_configured": false, 00:11:16.380 "data_offset": 0, 00:11:16.380 "data_size": 0 00:11:16.380 }, 00:11:16.380 { 00:11:16.380 "name": "BaseBdev2", 00:11:16.380 "uuid": "7689aa42-83f4-4b66-98f6-1feed79b4081", 
00:11:16.380 "is_configured": true, 00:11:16.380 "data_offset": 2048, 00:11:16.380 "data_size": 63488 00:11:16.380 }, 00:11:16.380 { 00:11:16.380 "name": "BaseBdev3", 00:11:16.380 "uuid": "52091b79-7036-43d6-b8e4-9210d8ea26e5", 00:11:16.380 "is_configured": true, 00:11:16.380 "data_offset": 2048, 00:11:16.380 "data_size": 63488 00:11:16.380 }, 00:11:16.380 { 00:11:16.380 "name": "BaseBdev4", 00:11:16.380 "uuid": "9ba85835-adcc-4486-80f4-ae54c01eaef4", 00:11:16.380 "is_configured": true, 00:11:16.380 "data_offset": 2048, 00:11:16.380 "data_size": 63488 00:11:16.380 } 00:11:16.380 ] 00:11:16.380 }' 00:11:16.380 04:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.380 04:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.950 04:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:16.950 04:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.950 04:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.950 [2024-11-21 04:09:16.698197] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:16.950 04:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.950 04:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:16.950 04:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:16.950 04:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:16.950 04:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:16.950 04:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:16.950 04:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:16.950 04:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.950 04:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.950 04:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.950 04:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.950 04:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.950 04:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:16.950 04:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.950 04:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.950 04:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.950 04:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.950 "name": "Existed_Raid", 00:11:16.950 "uuid": "f3db47a5-008f-493f-8b83-7923e96e3fb6", 00:11:16.950 "strip_size_kb": 0, 00:11:16.950 "state": "configuring", 00:11:16.950 "raid_level": "raid1", 00:11:16.950 "superblock": true, 00:11:16.950 "num_base_bdevs": 4, 00:11:16.950 "num_base_bdevs_discovered": 2, 00:11:16.950 "num_base_bdevs_operational": 4, 00:11:16.950 "base_bdevs_list": [ 00:11:16.950 { 00:11:16.950 "name": "BaseBdev1", 00:11:16.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.950 "is_configured": false, 00:11:16.950 "data_offset": 0, 00:11:16.950 "data_size": 0 00:11:16.950 }, 00:11:16.950 { 00:11:16.950 "name": null, 00:11:16.950 "uuid": "7689aa42-83f4-4b66-98f6-1feed79b4081", 00:11:16.950 
"is_configured": false, 00:11:16.950 "data_offset": 0, 00:11:16.950 "data_size": 63488 00:11:16.950 }, 00:11:16.950 { 00:11:16.950 "name": "BaseBdev3", 00:11:16.950 "uuid": "52091b79-7036-43d6-b8e4-9210d8ea26e5", 00:11:16.950 "is_configured": true, 00:11:16.950 "data_offset": 2048, 00:11:16.950 "data_size": 63488 00:11:16.950 }, 00:11:16.950 { 00:11:16.950 "name": "BaseBdev4", 00:11:16.950 "uuid": "9ba85835-adcc-4486-80f4-ae54c01eaef4", 00:11:16.950 "is_configured": true, 00:11:16.950 "data_offset": 2048, 00:11:16.950 "data_size": 63488 00:11:16.950 } 00:11:16.950 ] 00:11:16.950 }' 00:11:16.950 04:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.950 04:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.209 04:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.209 04:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:17.209 04:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.209 04:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.469 04:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.469 04:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:17.469 04:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:17.469 04:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.469 04:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.469 [2024-11-21 04:09:17.242115] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:17.469 BaseBdev1 
00:11:17.469 04:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.469 04:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:17.469 04:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:17.469 04:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:17.469 04:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:17.469 04:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:17.469 04:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:17.469 04:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:17.469 04:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.469 04:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.469 04:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.469 04:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:17.469 04:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.469 04:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.469 [ 00:11:17.469 { 00:11:17.469 "name": "BaseBdev1", 00:11:17.469 "aliases": [ 00:11:17.469 "70e81ffc-8ea9-43ac-a4fa-6f7fe97b2e8e" 00:11:17.469 ], 00:11:17.469 "product_name": "Malloc disk", 00:11:17.469 "block_size": 512, 00:11:17.469 "num_blocks": 65536, 00:11:17.469 "uuid": "70e81ffc-8ea9-43ac-a4fa-6f7fe97b2e8e", 00:11:17.469 "assigned_rate_limits": { 00:11:17.469 
"rw_ios_per_sec": 0, 00:11:17.469 "rw_mbytes_per_sec": 0, 00:11:17.469 "r_mbytes_per_sec": 0, 00:11:17.469 "w_mbytes_per_sec": 0 00:11:17.469 }, 00:11:17.469 "claimed": true, 00:11:17.469 "claim_type": "exclusive_write", 00:11:17.469 "zoned": false, 00:11:17.469 "supported_io_types": { 00:11:17.469 "read": true, 00:11:17.469 "write": true, 00:11:17.469 "unmap": true, 00:11:17.469 "flush": true, 00:11:17.469 "reset": true, 00:11:17.469 "nvme_admin": false, 00:11:17.469 "nvme_io": false, 00:11:17.469 "nvme_io_md": false, 00:11:17.469 "write_zeroes": true, 00:11:17.469 "zcopy": true, 00:11:17.469 "get_zone_info": false, 00:11:17.469 "zone_management": false, 00:11:17.469 "zone_append": false, 00:11:17.469 "compare": false, 00:11:17.469 "compare_and_write": false, 00:11:17.469 "abort": true, 00:11:17.469 "seek_hole": false, 00:11:17.469 "seek_data": false, 00:11:17.469 "copy": true, 00:11:17.469 "nvme_iov_md": false 00:11:17.469 }, 00:11:17.469 "memory_domains": [ 00:11:17.469 { 00:11:17.469 "dma_device_id": "system", 00:11:17.469 "dma_device_type": 1 00:11:17.469 }, 00:11:17.469 { 00:11:17.469 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.469 "dma_device_type": 2 00:11:17.469 } 00:11:17.469 ], 00:11:17.469 "driver_specific": {} 00:11:17.469 } 00:11:17.469 ] 00:11:17.469 04:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.469 04:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:17.469 04:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:17.469 04:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:17.469 04:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:17.469 04:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:11:17.469 04:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:17.469 04:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:17.469 04:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.469 04:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.469 04:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.469 04:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.469 04:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.469 04:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.469 04:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.469 04:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:17.469 04:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.469 04:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.469 "name": "Existed_Raid", 00:11:17.469 "uuid": "f3db47a5-008f-493f-8b83-7923e96e3fb6", 00:11:17.469 "strip_size_kb": 0, 00:11:17.469 "state": "configuring", 00:11:17.469 "raid_level": "raid1", 00:11:17.469 "superblock": true, 00:11:17.469 "num_base_bdevs": 4, 00:11:17.469 "num_base_bdevs_discovered": 3, 00:11:17.469 "num_base_bdevs_operational": 4, 00:11:17.469 "base_bdevs_list": [ 00:11:17.469 { 00:11:17.469 "name": "BaseBdev1", 00:11:17.469 "uuid": "70e81ffc-8ea9-43ac-a4fa-6f7fe97b2e8e", 00:11:17.469 "is_configured": true, 00:11:17.469 "data_offset": 2048, 00:11:17.470 "data_size": 63488 
00:11:17.470 }, 00:11:17.470 { 00:11:17.470 "name": null, 00:11:17.470 "uuid": "7689aa42-83f4-4b66-98f6-1feed79b4081", 00:11:17.470 "is_configured": false, 00:11:17.470 "data_offset": 0, 00:11:17.470 "data_size": 63488 00:11:17.470 }, 00:11:17.470 { 00:11:17.470 "name": "BaseBdev3", 00:11:17.470 "uuid": "52091b79-7036-43d6-b8e4-9210d8ea26e5", 00:11:17.470 "is_configured": true, 00:11:17.470 "data_offset": 2048, 00:11:17.470 "data_size": 63488 00:11:17.470 }, 00:11:17.470 { 00:11:17.470 "name": "BaseBdev4", 00:11:17.470 "uuid": "9ba85835-adcc-4486-80f4-ae54c01eaef4", 00:11:17.470 "is_configured": true, 00:11:17.470 "data_offset": 2048, 00:11:17.470 "data_size": 63488 00:11:17.470 } 00:11:17.470 ] 00:11:17.470 }' 00:11:17.470 04:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.470 04:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.728 04:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.729 04:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:17.729 04:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.729 04:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.988 04:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.988 04:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:17.988 04:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:17.988 04:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.988 04:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.988 
[2024-11-21 04:09:17.733421] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:17.988 04:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.988 04:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:17.988 04:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:17.988 04:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:17.988 04:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:17.988 04:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:17.988 04:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:17.988 04:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.988 04:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.988 04:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.988 04:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.988 04:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.988 04:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:17.988 04:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.988 04:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.988 04:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.988 04:09:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.988 "name": "Existed_Raid", 00:11:17.988 "uuid": "f3db47a5-008f-493f-8b83-7923e96e3fb6", 00:11:17.988 "strip_size_kb": 0, 00:11:17.988 "state": "configuring", 00:11:17.988 "raid_level": "raid1", 00:11:17.988 "superblock": true, 00:11:17.988 "num_base_bdevs": 4, 00:11:17.988 "num_base_bdevs_discovered": 2, 00:11:17.988 "num_base_bdevs_operational": 4, 00:11:17.988 "base_bdevs_list": [ 00:11:17.988 { 00:11:17.988 "name": "BaseBdev1", 00:11:17.988 "uuid": "70e81ffc-8ea9-43ac-a4fa-6f7fe97b2e8e", 00:11:17.988 "is_configured": true, 00:11:17.988 "data_offset": 2048, 00:11:17.988 "data_size": 63488 00:11:17.988 }, 00:11:17.988 { 00:11:17.988 "name": null, 00:11:17.988 "uuid": "7689aa42-83f4-4b66-98f6-1feed79b4081", 00:11:17.988 "is_configured": false, 00:11:17.988 "data_offset": 0, 00:11:17.988 "data_size": 63488 00:11:17.988 }, 00:11:17.988 { 00:11:17.988 "name": null, 00:11:17.988 "uuid": "52091b79-7036-43d6-b8e4-9210d8ea26e5", 00:11:17.988 "is_configured": false, 00:11:17.988 "data_offset": 0, 00:11:17.988 "data_size": 63488 00:11:17.988 }, 00:11:17.988 { 00:11:17.988 "name": "BaseBdev4", 00:11:17.988 "uuid": "9ba85835-adcc-4486-80f4-ae54c01eaef4", 00:11:17.988 "is_configured": true, 00:11:17.988 "data_offset": 2048, 00:11:17.988 "data_size": 63488 00:11:17.988 } 00:11:17.988 ] 00:11:17.988 }' 00:11:17.988 04:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.988 04:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.248 04:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.248 04:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.248 04:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:18.248 
04:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.248 04:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.507 04:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:18.507 04:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:18.507 04:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.507 04:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.507 [2024-11-21 04:09:18.244546] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:18.507 04:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.507 04:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:18.507 04:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:18.507 04:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:18.507 04:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:18.507 04:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:18.507 04:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:18.507 04:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.507 04:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.507 04:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:18.507 04:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.507 04:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.507 04:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:18.507 04:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.507 04:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.507 04:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.507 04:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.507 "name": "Existed_Raid", 00:11:18.507 "uuid": "f3db47a5-008f-493f-8b83-7923e96e3fb6", 00:11:18.507 "strip_size_kb": 0, 00:11:18.507 "state": "configuring", 00:11:18.507 "raid_level": "raid1", 00:11:18.507 "superblock": true, 00:11:18.507 "num_base_bdevs": 4, 00:11:18.507 "num_base_bdevs_discovered": 3, 00:11:18.507 "num_base_bdevs_operational": 4, 00:11:18.507 "base_bdevs_list": [ 00:11:18.507 { 00:11:18.507 "name": "BaseBdev1", 00:11:18.507 "uuid": "70e81ffc-8ea9-43ac-a4fa-6f7fe97b2e8e", 00:11:18.507 "is_configured": true, 00:11:18.507 "data_offset": 2048, 00:11:18.507 "data_size": 63488 00:11:18.507 }, 00:11:18.507 { 00:11:18.507 "name": null, 00:11:18.507 "uuid": "7689aa42-83f4-4b66-98f6-1feed79b4081", 00:11:18.507 "is_configured": false, 00:11:18.507 "data_offset": 0, 00:11:18.507 "data_size": 63488 00:11:18.507 }, 00:11:18.507 { 00:11:18.507 "name": "BaseBdev3", 00:11:18.507 "uuid": "52091b79-7036-43d6-b8e4-9210d8ea26e5", 00:11:18.507 "is_configured": true, 00:11:18.507 "data_offset": 2048, 00:11:18.507 "data_size": 63488 00:11:18.507 }, 00:11:18.507 { 00:11:18.507 "name": "BaseBdev4", 00:11:18.507 "uuid": 
"9ba85835-adcc-4486-80f4-ae54c01eaef4", 00:11:18.507 "is_configured": true, 00:11:18.507 "data_offset": 2048, 00:11:18.507 "data_size": 63488 00:11:18.507 } 00:11:18.507 ] 00:11:18.507 }' 00:11:18.507 04:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.507 04:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.767 04:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.767 04:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:18.767 04:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.767 04:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.767 04:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.027 04:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:19.027 04:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:19.027 04:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.027 04:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.027 [2024-11-21 04:09:18.755925] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:19.027 04:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.027 04:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:19.027 04:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:19.027 04:09:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:19.027 04:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:19.027 04:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:19.027 04:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:19.027 04:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.027 04:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.027 04:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:19.027 04:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.027 04:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.027 04:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:19.027 04:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.027 04:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.027 04:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.027 04:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.027 "name": "Existed_Raid", 00:11:19.027 "uuid": "f3db47a5-008f-493f-8b83-7923e96e3fb6", 00:11:19.027 "strip_size_kb": 0, 00:11:19.027 "state": "configuring", 00:11:19.027 "raid_level": "raid1", 00:11:19.027 "superblock": true, 00:11:19.027 "num_base_bdevs": 4, 00:11:19.027 "num_base_bdevs_discovered": 2, 00:11:19.027 "num_base_bdevs_operational": 4, 00:11:19.027 "base_bdevs_list": [ 00:11:19.027 { 00:11:19.027 "name": null, 00:11:19.027 
"uuid": "70e81ffc-8ea9-43ac-a4fa-6f7fe97b2e8e", 00:11:19.027 "is_configured": false, 00:11:19.027 "data_offset": 0, 00:11:19.027 "data_size": 63488 00:11:19.027 }, 00:11:19.027 { 00:11:19.027 "name": null, 00:11:19.027 "uuid": "7689aa42-83f4-4b66-98f6-1feed79b4081", 00:11:19.027 "is_configured": false, 00:11:19.027 "data_offset": 0, 00:11:19.027 "data_size": 63488 00:11:19.027 }, 00:11:19.027 { 00:11:19.027 "name": "BaseBdev3", 00:11:19.027 "uuid": "52091b79-7036-43d6-b8e4-9210d8ea26e5", 00:11:19.027 "is_configured": true, 00:11:19.027 "data_offset": 2048, 00:11:19.027 "data_size": 63488 00:11:19.027 }, 00:11:19.027 { 00:11:19.027 "name": "BaseBdev4", 00:11:19.027 "uuid": "9ba85835-adcc-4486-80f4-ae54c01eaef4", 00:11:19.027 "is_configured": true, 00:11:19.027 "data_offset": 2048, 00:11:19.027 "data_size": 63488 00:11:19.027 } 00:11:19.027 ] 00:11:19.027 }' 00:11:19.027 04:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.027 04:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.287 04:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:19.287 04:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.287 04:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.287 04:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.287 04:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.287 04:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:19.287 04:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:19.287 04:09:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.287 04:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.287 [2024-11-21 04:09:19.223775] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:19.287 04:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.287 04:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:19.287 04:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:19.287 04:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:19.287 04:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:19.287 04:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:19.287 04:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:19.287 04:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.287 04:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.287 04:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:19.287 04:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.287 04:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.287 04:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.287 04:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.287 04:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:19.287 04:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.546 04:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.547 "name": "Existed_Raid", 00:11:19.547 "uuid": "f3db47a5-008f-493f-8b83-7923e96e3fb6", 00:11:19.547 "strip_size_kb": 0, 00:11:19.547 "state": "configuring", 00:11:19.547 "raid_level": "raid1", 00:11:19.547 "superblock": true, 00:11:19.547 "num_base_bdevs": 4, 00:11:19.547 "num_base_bdevs_discovered": 3, 00:11:19.547 "num_base_bdevs_operational": 4, 00:11:19.547 "base_bdevs_list": [ 00:11:19.547 { 00:11:19.547 "name": null, 00:11:19.547 "uuid": "70e81ffc-8ea9-43ac-a4fa-6f7fe97b2e8e", 00:11:19.547 "is_configured": false, 00:11:19.547 "data_offset": 0, 00:11:19.547 "data_size": 63488 00:11:19.547 }, 00:11:19.547 { 00:11:19.547 "name": "BaseBdev2", 00:11:19.547 "uuid": "7689aa42-83f4-4b66-98f6-1feed79b4081", 00:11:19.547 "is_configured": true, 00:11:19.547 "data_offset": 2048, 00:11:19.547 "data_size": 63488 00:11:19.547 }, 00:11:19.547 { 00:11:19.547 "name": "BaseBdev3", 00:11:19.547 "uuid": "52091b79-7036-43d6-b8e4-9210d8ea26e5", 00:11:19.547 "is_configured": true, 00:11:19.547 "data_offset": 2048, 00:11:19.547 "data_size": 63488 00:11:19.547 }, 00:11:19.547 { 00:11:19.547 "name": "BaseBdev4", 00:11:19.547 "uuid": "9ba85835-adcc-4486-80f4-ae54c01eaef4", 00:11:19.547 "is_configured": true, 00:11:19.547 "data_offset": 2048, 00:11:19.547 "data_size": 63488 00:11:19.547 } 00:11:19.547 ] 00:11:19.547 }' 00:11:19.547 04:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.547 04:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.806 04:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:19.807 04:09:19 
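The `verify_raid_bdev_state` calls traced above (bdev_raid.sh@103-115) all follow one pattern: dump every raid bdev over RPC, select the entry under test by name with jq, then compare individual fields against the expected values. A minimal standalone sketch of that pattern — with a canned, hypothetical JSON blob standing in for the live `rpc_cmd bdev_raid_get_bdevs all` output — is:

```shell
#!/usr/bin/env bash
# Sketch of the verify_raid_bdev_state pattern seen in this trace.
# The canned JSON below is a stand-in for live RPC output of
# `rpc_cmd bdev_raid_get_bdevs all`; field names match the trace.
raid_bdevs='[{"name": "Existed_Raid", "state": "configuring",
              "raid_level": "raid1", "num_base_bdevs": 4,
              "num_base_bdevs_discovered": 2,
              "num_base_bdevs_operational": 4}]'

raid_bdev_name=Existed_Raid
expected_state=configuring
expected_raid_level=raid1

# As at bdev_raid.sh@113: select the entry for the bdev under test.
raid_bdev_info=$(jq -r --arg NAME "$raid_bdev_name" \
    '.[] | select(.name == $NAME)' <<<"$raid_bdevs")

# Compare the individual fields the helper checks.
state=$(jq -r .state <<<"$raid_bdev_info")
level=$(jq -r .raid_level <<<"$raid_bdev_info")
[[ $state == "$expected_state" && $level == "$expected_raid_level" ]] &&
    echo "Existed_Raid is $state/$level as expected"
```

Against a live target the same jq filter simply runs on real `bdev_raid_get_bdevs` output instead of the here-string.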
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.807 04:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.807 04:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.807 04:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.807 04:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:19.807 04:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.807 04:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.807 04:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.807 04:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:19.807 04:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.807 04:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 70e81ffc-8ea9-43ac-a4fa-6f7fe97b2e8e 00:11:19.807 04:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.807 04:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.807 [2024-11-21 04:09:19.727945] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:19.807 [2024-11-21 04:09:19.728307] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:11:19.807 [2024-11-21 04:09:19.728364] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:19.807 [2024-11-21 04:09:19.728671] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000002a10 00:11:19.807 NewBaseBdev 00:11:19.807 [2024-11-21 04:09:19.728870] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:11:19.807 [2024-11-21 04:09:19.728934] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:11:19.807 [2024-11-21 04:09:19.729124] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:19.807 04:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.807 04:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:19.807 04:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:19.807 04:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:19.807 04:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:19.807 04:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:19.807 04:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:19.807 04:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:19.807 04:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.807 04:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.807 04:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.807 04:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:19.807 04:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.807 04:09:19 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.807 [ 00:11:19.807 { 00:11:19.807 "name": "NewBaseBdev", 00:11:19.807 "aliases": [ 00:11:19.807 "70e81ffc-8ea9-43ac-a4fa-6f7fe97b2e8e" 00:11:19.807 ], 00:11:19.807 "product_name": "Malloc disk", 00:11:19.807 "block_size": 512, 00:11:19.807 "num_blocks": 65536, 00:11:19.807 "uuid": "70e81ffc-8ea9-43ac-a4fa-6f7fe97b2e8e", 00:11:19.807 "assigned_rate_limits": { 00:11:19.807 "rw_ios_per_sec": 0, 00:11:19.807 "rw_mbytes_per_sec": 0, 00:11:19.807 "r_mbytes_per_sec": 0, 00:11:19.807 "w_mbytes_per_sec": 0 00:11:19.807 }, 00:11:19.807 "claimed": true, 00:11:19.807 "claim_type": "exclusive_write", 00:11:19.807 "zoned": false, 00:11:19.807 "supported_io_types": { 00:11:19.807 "read": true, 00:11:19.807 "write": true, 00:11:19.807 "unmap": true, 00:11:19.807 "flush": true, 00:11:19.807 "reset": true, 00:11:19.807 "nvme_admin": false, 00:11:19.807 "nvme_io": false, 00:11:19.807 "nvme_io_md": false, 00:11:19.807 "write_zeroes": true, 00:11:19.807 "zcopy": true, 00:11:19.807 "get_zone_info": false, 00:11:19.807 "zone_management": false, 00:11:19.807 "zone_append": false, 00:11:19.807 "compare": false, 00:11:19.807 "compare_and_write": false, 00:11:19.807 "abort": true, 00:11:19.807 "seek_hole": false, 00:11:19.807 "seek_data": false, 00:11:19.807 "copy": true, 00:11:19.807 "nvme_iov_md": false 00:11:19.807 }, 00:11:19.807 "memory_domains": [ 00:11:19.807 { 00:11:19.807 "dma_device_id": "system", 00:11:19.807 "dma_device_type": 1 00:11:19.807 }, 00:11:19.807 { 00:11:19.807 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:19.807 "dma_device_type": 2 00:11:19.807 } 00:11:19.807 ], 00:11:19.807 "driver_specific": {} 00:11:19.807 } 00:11:19.807 ] 00:11:19.807 04:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.807 04:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:19.807 04:09:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:19.807 04:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:19.807 04:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:19.807 04:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:19.807 04:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:19.807 04:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:19.807 04:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.807 04:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.807 04:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:19.807 04:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.807 04:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.807 04:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.807 04:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.807 04:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:20.067 04:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.067 04:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.067 "name": "Existed_Raid", 00:11:20.067 "uuid": "f3db47a5-008f-493f-8b83-7923e96e3fb6", 00:11:20.067 "strip_size_kb": 0, 00:11:20.067 
"state": "online", 00:11:20.067 "raid_level": "raid1", 00:11:20.067 "superblock": true, 00:11:20.067 "num_base_bdevs": 4, 00:11:20.067 "num_base_bdevs_discovered": 4, 00:11:20.067 "num_base_bdevs_operational": 4, 00:11:20.067 "base_bdevs_list": [ 00:11:20.067 { 00:11:20.067 "name": "NewBaseBdev", 00:11:20.067 "uuid": "70e81ffc-8ea9-43ac-a4fa-6f7fe97b2e8e", 00:11:20.067 "is_configured": true, 00:11:20.067 "data_offset": 2048, 00:11:20.067 "data_size": 63488 00:11:20.067 }, 00:11:20.067 { 00:11:20.067 "name": "BaseBdev2", 00:11:20.067 "uuid": "7689aa42-83f4-4b66-98f6-1feed79b4081", 00:11:20.067 "is_configured": true, 00:11:20.067 "data_offset": 2048, 00:11:20.067 "data_size": 63488 00:11:20.067 }, 00:11:20.067 { 00:11:20.067 "name": "BaseBdev3", 00:11:20.067 "uuid": "52091b79-7036-43d6-b8e4-9210d8ea26e5", 00:11:20.067 "is_configured": true, 00:11:20.067 "data_offset": 2048, 00:11:20.067 "data_size": 63488 00:11:20.067 }, 00:11:20.067 { 00:11:20.067 "name": "BaseBdev4", 00:11:20.067 "uuid": "9ba85835-adcc-4486-80f4-ae54c01eaef4", 00:11:20.067 "is_configured": true, 00:11:20.067 "data_offset": 2048, 00:11:20.067 "data_size": 63488 00:11:20.067 } 00:11:20.067 ] 00:11:20.067 }' 00:11:20.067 04:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.067 04:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.348 04:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:20.348 04:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:20.348 04:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:20.348 04:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:20.348 04:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:20.348 
04:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:20.348 04:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:20.348 04:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:20.348 04:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.348 04:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.348 [2024-11-21 04:09:20.207602] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:20.348 04:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.348 04:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:20.348 "name": "Existed_Raid", 00:11:20.348 "aliases": [ 00:11:20.348 "f3db47a5-008f-493f-8b83-7923e96e3fb6" 00:11:20.348 ], 00:11:20.348 "product_name": "Raid Volume", 00:11:20.348 "block_size": 512, 00:11:20.348 "num_blocks": 63488, 00:11:20.348 "uuid": "f3db47a5-008f-493f-8b83-7923e96e3fb6", 00:11:20.348 "assigned_rate_limits": { 00:11:20.348 "rw_ios_per_sec": 0, 00:11:20.348 "rw_mbytes_per_sec": 0, 00:11:20.348 "r_mbytes_per_sec": 0, 00:11:20.348 "w_mbytes_per_sec": 0 00:11:20.348 }, 00:11:20.348 "claimed": false, 00:11:20.348 "zoned": false, 00:11:20.348 "supported_io_types": { 00:11:20.348 "read": true, 00:11:20.348 "write": true, 00:11:20.348 "unmap": false, 00:11:20.348 "flush": false, 00:11:20.348 "reset": true, 00:11:20.348 "nvme_admin": false, 00:11:20.348 "nvme_io": false, 00:11:20.348 "nvme_io_md": false, 00:11:20.348 "write_zeroes": true, 00:11:20.348 "zcopy": false, 00:11:20.348 "get_zone_info": false, 00:11:20.348 "zone_management": false, 00:11:20.348 "zone_append": false, 00:11:20.348 "compare": false, 00:11:20.348 "compare_and_write": false, 00:11:20.348 
"abort": false, 00:11:20.348 "seek_hole": false, 00:11:20.348 "seek_data": false, 00:11:20.348 "copy": false, 00:11:20.348 "nvme_iov_md": false 00:11:20.348 }, 00:11:20.348 "memory_domains": [ 00:11:20.348 { 00:11:20.348 "dma_device_id": "system", 00:11:20.348 "dma_device_type": 1 00:11:20.348 }, 00:11:20.348 { 00:11:20.348 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.348 "dma_device_type": 2 00:11:20.348 }, 00:11:20.348 { 00:11:20.348 "dma_device_id": "system", 00:11:20.348 "dma_device_type": 1 00:11:20.348 }, 00:11:20.348 { 00:11:20.348 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.348 "dma_device_type": 2 00:11:20.348 }, 00:11:20.348 { 00:11:20.348 "dma_device_id": "system", 00:11:20.348 "dma_device_type": 1 00:11:20.348 }, 00:11:20.348 { 00:11:20.348 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.348 "dma_device_type": 2 00:11:20.348 }, 00:11:20.348 { 00:11:20.348 "dma_device_id": "system", 00:11:20.348 "dma_device_type": 1 00:11:20.348 }, 00:11:20.348 { 00:11:20.348 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.348 "dma_device_type": 2 00:11:20.348 } 00:11:20.348 ], 00:11:20.348 "driver_specific": { 00:11:20.348 "raid": { 00:11:20.348 "uuid": "f3db47a5-008f-493f-8b83-7923e96e3fb6", 00:11:20.348 "strip_size_kb": 0, 00:11:20.348 "state": "online", 00:11:20.348 "raid_level": "raid1", 00:11:20.348 "superblock": true, 00:11:20.348 "num_base_bdevs": 4, 00:11:20.348 "num_base_bdevs_discovered": 4, 00:11:20.348 "num_base_bdevs_operational": 4, 00:11:20.348 "base_bdevs_list": [ 00:11:20.348 { 00:11:20.348 "name": "NewBaseBdev", 00:11:20.348 "uuid": "70e81ffc-8ea9-43ac-a4fa-6f7fe97b2e8e", 00:11:20.348 "is_configured": true, 00:11:20.348 "data_offset": 2048, 00:11:20.348 "data_size": 63488 00:11:20.348 }, 00:11:20.348 { 00:11:20.348 "name": "BaseBdev2", 00:11:20.348 "uuid": "7689aa42-83f4-4b66-98f6-1feed79b4081", 00:11:20.348 "is_configured": true, 00:11:20.348 "data_offset": 2048, 00:11:20.348 "data_size": 63488 00:11:20.348 }, 00:11:20.348 { 
00:11:20.348 "name": "BaseBdev3", 00:11:20.348 "uuid": "52091b79-7036-43d6-b8e4-9210d8ea26e5", 00:11:20.348 "is_configured": true, 00:11:20.348 "data_offset": 2048, 00:11:20.348 "data_size": 63488 00:11:20.348 }, 00:11:20.348 { 00:11:20.348 "name": "BaseBdev4", 00:11:20.348 "uuid": "9ba85835-adcc-4486-80f4-ae54c01eaef4", 00:11:20.348 "is_configured": true, 00:11:20.348 "data_offset": 2048, 00:11:20.348 "data_size": 63488 00:11:20.348 } 00:11:20.348 ] 00:11:20.348 } 00:11:20.348 } 00:11:20.348 }' 00:11:20.348 04:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:20.348 04:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:20.348 BaseBdev2 00:11:20.348 BaseBdev3 00:11:20.349 BaseBdev4' 00:11:20.349 04:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:20.610 04:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:20.610 04:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:20.610 04:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:20.610 04:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.610 04:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.610 04:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:20.610 04:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.610 04:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
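The base-bdev loop above (bdev_raid.sh@187-193) reduces each bdev to a format fingerprint, `[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")`, and requires every configured base bdev to match the raid volume. jq's `join` renders the null fields as empty strings, which is why the trace compares against `512 ` followed by trailing spaces (`\5\1\2\ \ \ `). A self-contained sketch, with hypothetical JSON blobs standing in for `bdev_get_bdevs` output:

```shell
#!/usr/bin/env bash
# Sketch of the fingerprint comparison in verify_raid_bdev_properties.
# The two blobs are hypothetical stand-ins for the live output of
# `rpc_cmd bdev_get_bdevs -b Existed_Raid` / `-b NewBaseBdev`.
raid_json='{"block_size": 512, "md_size": null, "md_interleave": null, "dif_type": null}'
base_json='{"block_size": 512, "md_size": null, "md_interleave": null, "dif_type": null}'

# As at bdev_raid.sh@189/@192: null fields join as empty strings,
# so a plain 512-byte bdev fingerprints as "512" plus three spaces.
fingerprint() {
    jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' <<<"$1"
}

cmp_raid_bdev=$(fingerprint "$raid_json")
cmp_base_bdev=$(fingerprint "$base_json")
[[ $cmp_raid_bdev == "$cmp_base_bdev" ]] &&
    echo "base bdev format matches raid volume"
```

The comparison deliberately covers metadata layout (`md_size`, `md_interleave`, `dif_type`) as well as block size, so a base bdev with mismatched DIF settings would fail even at equal block size.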
00:11:20.610 04:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:20.610 04:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:20.610 04:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:20.610 04:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.610 04:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.610 04:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:20.610 04:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.610 04:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:20.610 04:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:20.610 04:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:20.610 04:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:20.610 04:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.610 04:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.610 04:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:20.610 04:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.610 04:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:20.610 04:09:20 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:20.610 04:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:20.610 04:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:20.610 04:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.610 04:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:20.610 04:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.610 04:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.610 04:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:20.610 04:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:20.610 04:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:20.610 04:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.610 04:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.610 [2024-11-21 04:09:20.542648] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:20.610 [2024-11-21 04:09:20.542738] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:20.610 [2024-11-21 04:09:20.542861] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:20.610 [2024-11-21 04:09:20.543215] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:20.610 [2024-11-21 04:09:20.543290] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000001c80 name Existed_Raid, state offline 00:11:20.610 04:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.610 04:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 84661 00:11:20.610 04:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 84661 ']' 00:11:20.610 04:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 84661 00:11:20.610 04:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:20.611 04:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:20.611 04:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84661 00:11:20.870 04:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:20.870 04:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:20.870 04:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84661' 00:11:20.870 killing process with pid 84661 00:11:20.870 04:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 84661 00:11:20.870 [2024-11-21 04:09:20.590499] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:20.870 04:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 84661 00:11:20.870 [2024-11-21 04:09:20.669607] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:21.130 ************************************ 00:11:21.130 END TEST raid_state_function_test_sb 00:11:21.130 ************************************ 00:11:21.130 04:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:21.130 00:11:21.130 real 0m9.796s 
00:11:21.130 user 0m16.356s 00:11:21.130 sys 0m2.219s 00:11:21.130 04:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:21.130 04:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.130 04:09:21 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:11:21.130 04:09:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:21.130 04:09:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:21.130 04:09:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:21.130 ************************************ 00:11:21.130 START TEST raid_superblock_test 00:11:21.130 ************************************ 00:11:21.130 04:09:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 00:11:21.130 04:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:11:21.130 04:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:21.130 04:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:21.130 04:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:21.130 04:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:21.130 04:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:21.130 04:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:21.130 04:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:21.130 04:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:21.130 04:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:21.130 04:09:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:21.130 04:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:21.130 04:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:21.130 04:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:11:21.130 04:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:11:21.130 04:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=85309 00:11:21.130 04:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:21.130 04:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 85309 00:11:21.130 04:09:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 85309 ']' 00:11:21.130 04:09:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:21.131 04:09:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:21.131 04:09:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:21.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:21.131 04:09:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:21.131 04:09:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.390 [2024-11-21 04:09:21.173487] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:11:21.390 [2024-11-21 04:09:21.173665] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85309 ] 00:11:21.390 [2024-11-21 04:09:21.333325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:21.650 [2024-11-21 04:09:21.378203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:21.650 [2024-11-21 04:09:21.456452] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:21.650 [2024-11-21 04:09:21.456503] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:22.221 04:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:22.221 04:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:22.221 04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:22.221 04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:22.221 04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:22.221 04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:22.221 04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:22.221 04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:22.221 04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:22.221 04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:22.221 04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:22.221 
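The `raid_superblock_test` prologue above (bdev_raid.sh@416-423) walks `i` from 1 to `num_base_bdevs`, accumulating parallel arrays of malloc names, passthru names, and fixed passthru UUIDs before creating each pair. The bookkeeping alone can be sketched as:

```shell
#!/usr/bin/env bash
# Sketch of the array bookkeeping in raid_superblock_test
# (bdev_raid.sh@416-423): each pass derives a malloc name, a
# passthru name, and the fixed passthru UUID for base bdev $i.
num_base_bdevs=4
base_bdevs_malloc=() base_bdevs_pt=() base_bdevs_pt_uuid=()
for ((i = 1; i <= num_base_bdevs; i++)); do
    base_bdevs_malloc+=("malloc$i")
    base_bdevs_pt+=("pt$i")
    base_bdevs_pt_uuid+=("$(printf '00000000-0000-0000-0000-%012d' "$i")")
done
echo "${base_bdevs_pt[@]}"        # pt1 pt2 pt3 pt4
echo "${base_bdevs_pt_uuid[0]}"   # 00000000-0000-0000-0000-000000000001
```

Each (malloc, pt, uuid) triple then feeds `bdev_malloc_create 32 512 -b $bdev_malloc` and `bdev_passthru_create -b $bdev_malloc -p $bdev_pt -u $bdev_pt_uuid`, as at bdev_raid.sh@425/@426 in the trace.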
04:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.221 04:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.221 malloc1 00:11:22.221 04:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.221 04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:22.221 04:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.221 04:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.221 [2024-11-21 04:09:22.060784] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:22.221 [2024-11-21 04:09:22.060945] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:22.221 [2024-11-21 04:09:22.060986] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:11:22.221 [2024-11-21 04:09:22.061026] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:22.221 [2024-11-21 04:09:22.063858] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:22.221 [2024-11-21 04:09:22.063943] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:22.221 pt1 00:11:22.221 04:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.221 04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:22.221 04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:22.221 04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:22.221 04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:22.221 04:09:22 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:22.221 04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:22.221 04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:22.221 04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:22.221 04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:22.221 04:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.221 04:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.221 malloc2 00:11:22.221 04:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.221 04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:22.221 04:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.221 04:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.221 [2024-11-21 04:09:22.100137] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:22.221 [2024-11-21 04:09:22.100283] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:22.221 [2024-11-21 04:09:22.100322] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:22.221 [2024-11-21 04:09:22.100368] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:22.221 [2024-11-21 04:09:22.103174] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:22.221 [2024-11-21 04:09:22.103269] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:22.221 
pt2 00:11:22.221 04:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.221 04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:22.221 04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:22.221 04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:22.221 04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:22.221 04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:22.221 04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:22.221 04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:22.221 04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:22.221 04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:22.221 04:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.221 04:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.221 malloc3 00:11:22.221 04:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.221 04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:22.221 04:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.221 04:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.221 [2024-11-21 04:09:22.135450] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:22.221 [2024-11-21 04:09:22.135593] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:22.221 [2024-11-21 04:09:22.135638] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:22.221 [2024-11-21 04:09:22.135678] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:22.221 [2024-11-21 04:09:22.138451] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:22.221 [2024-11-21 04:09:22.138534] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:22.221 pt3 00:11:22.221 04:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.221 04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:22.221 04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:22.221 04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:22.221 04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:22.221 04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:22.221 04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:22.221 04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:22.221 04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:22.221 04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:22.221 04:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.221 04:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.221 malloc4 00:11:22.221 04:09:22 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.221 04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:22.222 04:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.222 04:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.222 [2024-11-21 04:09:22.183525] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:22.222 [2024-11-21 04:09:22.183666] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:22.222 [2024-11-21 04:09:22.183713] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:22.222 [2024-11-21 04:09:22.183749] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:22.222 [2024-11-21 04:09:22.186482] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:22.222 [2024-11-21 04:09:22.186563] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:22.222 pt4 00:11:22.222 04:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.222 04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:22.222 04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:22.222 04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:22.222 04:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.222 04:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.482 [2024-11-21 04:09:22.195545] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:22.482 [2024-11-21 04:09:22.197893] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:22.482 [2024-11-21 04:09:22.198031] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:22.482 [2024-11-21 04:09:22.198094] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:22.482 [2024-11-21 04:09:22.198315] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:11:22.482 [2024-11-21 04:09:22.198334] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:22.482 [2024-11-21 04:09:22.198618] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:11:22.482 [2024-11-21 04:09:22.198789] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:11:22.482 [2024-11-21 04:09:22.198800] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:11:22.482 [2024-11-21 04:09:22.198951] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:22.482 04:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.482 04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:22.482 04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:22.482 04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:22.482 04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:22.482 04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:22.482 04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:22.482 04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.482 
04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.482 04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.482 04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.482 04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.482 04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:22.482 04:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.482 04:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.482 04:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.482 04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.482 "name": "raid_bdev1", 00:11:22.482 "uuid": "7067d5b5-8d19-4002-9f55-0734bc231e13", 00:11:22.482 "strip_size_kb": 0, 00:11:22.482 "state": "online", 00:11:22.482 "raid_level": "raid1", 00:11:22.482 "superblock": true, 00:11:22.482 "num_base_bdevs": 4, 00:11:22.482 "num_base_bdevs_discovered": 4, 00:11:22.482 "num_base_bdevs_operational": 4, 00:11:22.482 "base_bdevs_list": [ 00:11:22.482 { 00:11:22.482 "name": "pt1", 00:11:22.482 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:22.482 "is_configured": true, 00:11:22.482 "data_offset": 2048, 00:11:22.482 "data_size": 63488 00:11:22.482 }, 00:11:22.482 { 00:11:22.482 "name": "pt2", 00:11:22.482 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:22.482 "is_configured": true, 00:11:22.482 "data_offset": 2048, 00:11:22.482 "data_size": 63488 00:11:22.482 }, 00:11:22.482 { 00:11:22.482 "name": "pt3", 00:11:22.482 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:22.482 "is_configured": true, 00:11:22.482 "data_offset": 2048, 00:11:22.482 "data_size": 63488 
00:11:22.482 },
00:11:22.482 {
00:11:22.482 "name": "pt4",
00:11:22.482 "uuid": "00000000-0000-0000-0000-000000000004",
00:11:22.482 "is_configured": true,
00:11:22.482 "data_offset": 2048,
00:11:22.482 "data_size": 63488
00:11:22.482 }
00:11:22.482 ]
00:11:22.482 }'
00:11:22.482 04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:22.482 04:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:22.742 04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:11:22.742 04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:11:22.742 04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:11:22.742 04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:11:22.742 04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:11:22.742 04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:11:22.742 04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:11:22.742 04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:11:22.742 04:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:22.742 04:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:22.742 [2024-11-21 04:09:22.699033] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:11:23.002 04:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:23.002 04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:11:23.002 "name": "raid_bdev1",
00:11:23.002 "aliases": [
00:11:23.002 "7067d5b5-8d19-4002-9f55-0734bc231e13"
00:11:23.002 ],
00:11:23.002 "product_name": "Raid Volume",
00:11:23.002 "block_size": 512,
00:11:23.002 "num_blocks": 63488,
00:11:23.002 "uuid": "7067d5b5-8d19-4002-9f55-0734bc231e13",
00:11:23.002 "assigned_rate_limits": {
00:11:23.002 "rw_ios_per_sec": 0,
00:11:23.002 "rw_mbytes_per_sec": 0,
00:11:23.002 "r_mbytes_per_sec": 0,
00:11:23.002 "w_mbytes_per_sec": 0
00:11:23.002 },
00:11:23.002 "claimed": false,
00:11:23.002 "zoned": false,
00:11:23.002 "supported_io_types": {
00:11:23.002 "read": true,
00:11:23.002 "write": true,
00:11:23.002 "unmap": false,
00:11:23.002 "flush": false,
00:11:23.002 "reset": true,
00:11:23.002 "nvme_admin": false,
00:11:23.002 "nvme_io": false,
00:11:23.002 "nvme_io_md": false,
00:11:23.002 "write_zeroes": true,
00:11:23.002 "zcopy": false,
00:11:23.002 "get_zone_info": false,
00:11:23.002 "zone_management": false,
00:11:23.002 "zone_append": false,
00:11:23.002 "compare": false,
00:11:23.002 "compare_and_write": false,
00:11:23.002 "abort": false,
00:11:23.002 "seek_hole": false,
00:11:23.002 "seek_data": false,
00:11:23.002 "copy": false,
00:11:23.002 "nvme_iov_md": false
00:11:23.002 },
00:11:23.002 "memory_domains": [
00:11:23.002 {
00:11:23.002 "dma_device_id": "system",
00:11:23.002 "dma_device_type": 1
00:11:23.002 },
00:11:23.002 {
00:11:23.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:23.002 "dma_device_type": 2
00:11:23.002 },
00:11:23.002 {
00:11:23.002 "dma_device_id": "system",
00:11:23.002 "dma_device_type": 1
00:11:23.002 },
00:11:23.002 {
00:11:23.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:23.002 "dma_device_type": 2
00:11:23.002 },
00:11:23.002 {
00:11:23.002 "dma_device_id": "system",
00:11:23.002 "dma_device_type": 1
00:11:23.002 },
00:11:23.002 {
00:11:23.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:23.002 "dma_device_type": 2
00:11:23.002 },
00:11:23.002 {
00:11:23.002 "dma_device_id": "system",
00:11:23.002 "dma_device_type": 1
00:11:23.002 },
00:11:23.002 {
00:11:23.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:23.002 "dma_device_type": 2
00:11:23.002 }
00:11:23.002 ],
00:11:23.002 "driver_specific": {
00:11:23.002 "raid": {
00:11:23.002 "uuid": "7067d5b5-8d19-4002-9f55-0734bc231e13",
00:11:23.002 "strip_size_kb": 0,
00:11:23.002 "state": "online",
00:11:23.002 "raid_level": "raid1",
00:11:23.002 "superblock": true,
00:11:23.002 "num_base_bdevs": 4,
00:11:23.002 "num_base_bdevs_discovered": 4,
00:11:23.003 "num_base_bdevs_operational": 4,
00:11:23.003 "base_bdevs_list": [
00:11:23.003 {
00:11:23.003 "name": "pt1",
00:11:23.003 "uuid": "00000000-0000-0000-0000-000000000001",
00:11:23.003 "is_configured": true,
00:11:23.003 "data_offset": 2048,
00:11:23.003 "data_size": 63488
00:11:23.003 },
00:11:23.003 {
00:11:23.003 "name": "pt2",
00:11:23.003 "uuid": "00000000-0000-0000-0000-000000000002",
00:11:23.003 "is_configured": true,
00:11:23.003 "data_offset": 2048,
00:11:23.003 "data_size": 63488
00:11:23.003 },
00:11:23.003 {
00:11:23.003 "name": "pt3",
00:11:23.003 "uuid": "00000000-0000-0000-0000-000000000003",
00:11:23.003 "is_configured": true,
00:11:23.003 "data_offset": 2048,
00:11:23.003 "data_size": 63488
00:11:23.003 },
00:11:23.003 {
00:11:23.003 "name": "pt4",
00:11:23.003 "uuid": "00000000-0000-0000-0000-000000000004",
00:11:23.003 "is_configured": true,
00:11:23.003 "data_offset": 2048,
00:11:23.003 "data_size": 63488
00:11:23.003 }
00:11:23.003 ]
00:11:23.003 }
00:11:23.003 }
00:11:23.003 }'
00:11:23.003 04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:11:23.003 04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:11:23.003 pt2
00:11:23.003 pt3
00:11:23.003 pt4'
00:11:23.003 04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:23.003 04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:11:23.003 04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:23.003 04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:23.003 04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:11:23.003 04:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:23.003 04:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:23.003 04:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:23.003 04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:23.003 04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:23.003 04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:23.003 04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:23.003 04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:11:23.003 04:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:23.003 04:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:23.003 04:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:23.003 04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:23.003 04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:23.003 04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:23.003 04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:23.003 04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:11:23.003 04:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:23.003 04:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:23.003 04:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:23.003 04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:23.003 04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:23.003 04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:23.003 04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:23.003 04:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4
00:11:23.003 04:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:23.003 04:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:23.264 04:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:23.264 04:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:23.264 04:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:23.264 04:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:11:23.264 04:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:11:23.264 04:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:23.264 04:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:23.264 [2024-11-21 04:09:23.018386] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:11:23.264 04:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:23.264 04:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=7067d5b5-8d19-4002-9f55-0734bc231e13
00:11:23.264 04:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 7067d5b5-8d19-4002-9f55-0734bc231e13 ']'
00:11:23.264 04:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:11:23.264 04:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:23.264 04:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:23.264 [2024-11-21 04:09:23.061984] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:11:23.264 [2024-11-21 04:09:23.062062] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:11:23.264 [2024-11-21 04:09:23.062168] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:11:23.264 [2024-11-21 04:09:23.062293] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:11:23.264 [2024-11-21 04:09:23.062311] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline
00:11:23.264 04:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:23.264 04:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:23.264 04:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:23.264 04:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:11:23.264 04:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:23.264 04:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:23.264 04:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:11:23.264 04:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:11:23.264 04:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:11:23.264 04:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:11:23.264 04:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:23.264 04:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:23.264 04:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:23.264 04:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:11:23.264 04:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:11:23.264 04:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:23.264 04:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:23.264 04:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:23.264 04:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:11:23.264 04:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3
00:11:23.264 04:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:23.264 04:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:23.264 04:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:23.264 04:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:11:23.264 04:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4
00:11:23.264 04:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:23.264 04:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:23.264 04:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:23.264 04:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:11:23.264 04:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:23.264 04:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:23.264 04:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:11:23.264 04:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:23.264 04:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:11:23.264 04:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:11:23.264 04:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0
00:11:23.264 04:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:11:23.264 04:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:11:23.264 04:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:11:23.264 04:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:11:23.264 04:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:11:23.264 04:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:11:23.264 04:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:23.264 04:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:23.264 [2024-11-21 04:09:23.217731] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:11:23.264 [2024-11-21 04:09:23.219951] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:11:23.264 [2024-11-21 04:09:23.219998] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:11:23.264 [2024-11-21 04:09:23.220038] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed
00:11:23.264 [2024-11-21 04:09:23.220090] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:11:23.264 [2024-11-21 04:09:23.220133] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:11:23.264 [2024-11-21 04:09:23.220152] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:11:23.264 [2024-11-21 04:09:23.220170] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4
00:11:23.264 [2024-11-21 04:09:23.220185] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:11:23.264 [2024-11-21 04:09:23.220195] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring
00:11:23.264 request:
00:11:23.264 {
00:11:23.264 "name": "raid_bdev1",
00:11:23.264 "raid_level": "raid1",
00:11:23.264 "base_bdevs": [
00:11:23.264 "malloc1",
00:11:23.264 "malloc2",
00:11:23.264 "malloc3",
00:11:23.264 "malloc4"
00:11:23.264 ],
00:11:23.264 "superblock": false,
00:11:23.264 "method": "bdev_raid_create",
00:11:23.264 "req_id": 1
00:11:23.264 }
00:11:23.264 Got JSON-RPC error response
00:11:23.264 response:
00:11:23.264 {
00:11:23.264 "code": -17,
00:11:23.264 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:11:23.264 }
00:11:23.265 04:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:11:23.265 04:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1
00:11:23.265 04:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:11:23.265 04:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:11:23.265 04:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:11:23.265 04:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:11:23.265 04:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:23.265 04:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:23.265 04:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:23.525 04:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:23.525 04:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:11:23.525 04:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:11:23.525 04:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:11:23.525 04:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:23.525 04:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:23.525 [2024-11-21 04:09:23.273628] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:11:23.525 [2024-11-21 04:09:23.273680] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:23.525 [2024-11-21 04:09:23.273705] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:11:23.525 [2024-11-21 04:09:23.273714] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:23.525 [2024-11-21 04:09:23.276366] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:23.525 [2024-11-21 04:09:23.276400] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:11:23.525 [2024-11-21 04:09:23.276481] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:11:23.525 [2024-11-21 04:09:23.276525] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:11:23.525 pt1
00:11:23.525 04:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:23.525 04:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4
00:11:23.525 04:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:23.525 04:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:23.525 04:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:23.525 04:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:23.525 04:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:23.525 04:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:23.525 04:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:23.525 04:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:23.525 04:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:23.525 04:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:23.525 04:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:23.525 04:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:23.525 04:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:23.525 04:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:23.525 04:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:23.525 "name": "raid_bdev1",
00:11:23.525 "uuid": "7067d5b5-8d19-4002-9f55-0734bc231e13",
00:11:23.525 "strip_size_kb": 0,
00:11:23.525 "state": "configuring",
00:11:23.525 "raid_level": "raid1",
00:11:23.525 "superblock": true,
00:11:23.525 "num_base_bdevs": 4,
00:11:23.525 "num_base_bdevs_discovered": 1,
00:11:23.525 "num_base_bdevs_operational": 4,
00:11:23.525 "base_bdevs_list": [
00:11:23.525 {
00:11:23.525 "name": "pt1",
00:11:23.525 "uuid": "00000000-0000-0000-0000-000000000001",
00:11:23.525 "is_configured": true,
00:11:23.525 "data_offset": 2048,
00:11:23.525 "data_size": 63488
00:11:23.525 },
00:11:23.525 {
00:11:23.525 "name": null,
00:11:23.525 "uuid": "00000000-0000-0000-0000-000000000002",
00:11:23.525 "is_configured": false,
00:11:23.525 "data_offset": 2048,
00:11:23.525 "data_size": 63488
00:11:23.525 },
00:11:23.525 {
00:11:23.525 "name": null,
00:11:23.525 "uuid": "00000000-0000-0000-0000-000000000003",
00:11:23.525 "is_configured": false,
00:11:23.525 "data_offset": 2048,
00:11:23.525 "data_size": 63488
00:11:23.525 },
00:11:23.525 {
00:11:23.525 "name": null,
00:11:23.525 "uuid": "00000000-0000-0000-0000-000000000004",
00:11:23.525 "is_configured": false,
00:11:23.525 "data_offset": 2048,
00:11:23.525 "data_size": 63488
00:11:23.525 }
00:11:23.525 ]
00:11:23.525 }'
00:11:23.525 04:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:23.525 04:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:23.786 04:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']'
00:11:23.786 04:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:11:23.786 04:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:23.786 04:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:23.786 [2024-11-21 04:09:23.712886] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:11:23.786 [2024-11-21 04:09:23.712957] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:23.786 [2024-11-21 04:09:23.712982] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80
00:11:23.786 [2024-11-21 04:09:23.712992] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:23.786 [2024-11-21 04:09:23.713530] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:23.786 [2024-11-21 04:09:23.713560] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:11:23.786 [2024-11-21 04:09:23.713651] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:11:23.786 [2024-11-21 04:09:23.713676] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:11:23.786 pt2 00:11:23.786 04:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.786 04:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:23.786 04:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.786 04:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.786 [2024-11-21 04:09:23.724876] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:23.786 04:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.786 04:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:11:23.786 04:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:23.786 04:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:23.786 04:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:23.786 04:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:23.786 04:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:23.786 04:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.786 04:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.786 04:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.786 04:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.786 04:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.786 04:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.786 04:09:23 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.786 04:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:24.046 04:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.046 04:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.046 "name": "raid_bdev1", 00:11:24.046 "uuid": "7067d5b5-8d19-4002-9f55-0734bc231e13", 00:11:24.046 "strip_size_kb": 0, 00:11:24.046 "state": "configuring", 00:11:24.046 "raid_level": "raid1", 00:11:24.046 "superblock": true, 00:11:24.046 "num_base_bdevs": 4, 00:11:24.046 "num_base_bdevs_discovered": 1, 00:11:24.046 "num_base_bdevs_operational": 4, 00:11:24.046 "base_bdevs_list": [ 00:11:24.046 { 00:11:24.046 "name": "pt1", 00:11:24.046 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:24.046 "is_configured": true, 00:11:24.046 "data_offset": 2048, 00:11:24.046 "data_size": 63488 00:11:24.046 }, 00:11:24.046 { 00:11:24.046 "name": null, 00:11:24.046 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:24.046 "is_configured": false, 00:11:24.046 "data_offset": 0, 00:11:24.046 "data_size": 63488 00:11:24.046 }, 00:11:24.046 { 00:11:24.046 "name": null, 00:11:24.046 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:24.046 "is_configured": false, 00:11:24.046 "data_offset": 2048, 00:11:24.046 "data_size": 63488 00:11:24.046 }, 00:11:24.046 { 00:11:24.046 "name": null, 00:11:24.046 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:24.046 "is_configured": false, 00:11:24.046 "data_offset": 2048, 00:11:24.046 "data_size": 63488 00:11:24.046 } 00:11:24.046 ] 00:11:24.046 }' 00:11:24.046 04:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.046 04:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.307 04:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:11:24.307 04:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:24.307 04:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:24.307 04:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.307 04:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.307 [2024-11-21 04:09:24.180121] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:24.307 [2024-11-21 04:09:24.180255] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:24.307 [2024-11-21 04:09:24.180315] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:11:24.307 [2024-11-21 04:09:24.180355] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:24.307 [2024-11-21 04:09:24.180856] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:24.307 [2024-11-21 04:09:24.180916] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:24.307 [2024-11-21 04:09:24.181034] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:24.307 [2024-11-21 04:09:24.181087] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:24.307 pt2 00:11:24.307 04:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.307 04:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:24.307 04:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:24.307 04:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:24.307 04:09:24 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.307 04:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.307 [2024-11-21 04:09:24.192055] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:24.307 [2024-11-21 04:09:24.192101] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:24.307 [2024-11-21 04:09:24.192116] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:24.307 [2024-11-21 04:09:24.192127] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:24.307 [2024-11-21 04:09:24.192496] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:24.307 [2024-11-21 04:09:24.192522] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:24.307 [2024-11-21 04:09:24.192589] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:24.307 [2024-11-21 04:09:24.192609] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:24.307 pt3 00:11:24.307 04:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.307 04:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:24.307 04:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:24.307 04:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:24.307 04:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.307 04:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.307 [2024-11-21 04:09:24.204041] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:24.307 [2024-11-21 
04:09:24.204098] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:24.307 [2024-11-21 04:09:24.204113] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:11:24.307 [2024-11-21 04:09:24.204123] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:24.307 [2024-11-21 04:09:24.204434] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:24.307 [2024-11-21 04:09:24.204458] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:24.307 [2024-11-21 04:09:24.204509] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:24.307 [2024-11-21 04:09:24.204544] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:24.307 [2024-11-21 04:09:24.204658] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:11:24.307 [2024-11-21 04:09:24.204671] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:24.307 [2024-11-21 04:09:24.204906] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:11:24.307 [2024-11-21 04:09:24.205042] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:11:24.307 [2024-11-21 04:09:24.205052] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:11:24.307 [2024-11-21 04:09:24.205157] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:24.307 pt4 00:11:24.307 04:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.307 04:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:24.307 04:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:24.307 04:09:24 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:24.307 04:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:24.307 04:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:24.307 04:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:24.307 04:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:24.307 04:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:24.307 04:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.307 04:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.307 04:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.307 04:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.307 04:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:24.307 04:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.307 04:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.307 04:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.307 04:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.307 04:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.307 "name": "raid_bdev1", 00:11:24.307 "uuid": "7067d5b5-8d19-4002-9f55-0734bc231e13", 00:11:24.307 "strip_size_kb": 0, 00:11:24.307 "state": "online", 00:11:24.307 "raid_level": "raid1", 00:11:24.307 "superblock": true, 00:11:24.307 "num_base_bdevs": 4, 00:11:24.307 
"num_base_bdevs_discovered": 4, 00:11:24.307 "num_base_bdevs_operational": 4, 00:11:24.307 "base_bdevs_list": [ 00:11:24.307 { 00:11:24.307 "name": "pt1", 00:11:24.307 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:24.307 "is_configured": true, 00:11:24.307 "data_offset": 2048, 00:11:24.307 "data_size": 63488 00:11:24.307 }, 00:11:24.307 { 00:11:24.307 "name": "pt2", 00:11:24.307 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:24.307 "is_configured": true, 00:11:24.307 "data_offset": 2048, 00:11:24.307 "data_size": 63488 00:11:24.307 }, 00:11:24.307 { 00:11:24.307 "name": "pt3", 00:11:24.307 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:24.307 "is_configured": true, 00:11:24.307 "data_offset": 2048, 00:11:24.307 "data_size": 63488 00:11:24.307 }, 00:11:24.307 { 00:11:24.307 "name": "pt4", 00:11:24.307 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:24.307 "is_configured": true, 00:11:24.307 "data_offset": 2048, 00:11:24.307 "data_size": 63488 00:11:24.307 } 00:11:24.307 ] 00:11:24.307 }' 00:11:24.307 04:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.307 04:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.879 04:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:24.879 04:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:24.879 04:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:24.879 04:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:24.879 04:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:24.879 04:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:24.879 04:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:11:24.879 04:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:24.879 04:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.879 04:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.879 [2024-11-21 04:09:24.647728] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:24.879 04:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.879 04:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:24.879 "name": "raid_bdev1", 00:11:24.879 "aliases": [ 00:11:24.879 "7067d5b5-8d19-4002-9f55-0734bc231e13" 00:11:24.879 ], 00:11:24.879 "product_name": "Raid Volume", 00:11:24.879 "block_size": 512, 00:11:24.879 "num_blocks": 63488, 00:11:24.879 "uuid": "7067d5b5-8d19-4002-9f55-0734bc231e13", 00:11:24.879 "assigned_rate_limits": { 00:11:24.879 "rw_ios_per_sec": 0, 00:11:24.879 "rw_mbytes_per_sec": 0, 00:11:24.879 "r_mbytes_per_sec": 0, 00:11:24.879 "w_mbytes_per_sec": 0 00:11:24.879 }, 00:11:24.879 "claimed": false, 00:11:24.879 "zoned": false, 00:11:24.879 "supported_io_types": { 00:11:24.879 "read": true, 00:11:24.879 "write": true, 00:11:24.879 "unmap": false, 00:11:24.879 "flush": false, 00:11:24.879 "reset": true, 00:11:24.879 "nvme_admin": false, 00:11:24.879 "nvme_io": false, 00:11:24.879 "nvme_io_md": false, 00:11:24.879 "write_zeroes": true, 00:11:24.879 "zcopy": false, 00:11:24.879 "get_zone_info": false, 00:11:24.879 "zone_management": false, 00:11:24.879 "zone_append": false, 00:11:24.879 "compare": false, 00:11:24.879 "compare_and_write": false, 00:11:24.879 "abort": false, 00:11:24.879 "seek_hole": false, 00:11:24.879 "seek_data": false, 00:11:24.879 "copy": false, 00:11:24.879 "nvme_iov_md": false 00:11:24.879 }, 00:11:24.879 "memory_domains": [ 00:11:24.879 { 00:11:24.879 "dma_device_id": "system", 00:11:24.879 
"dma_device_type": 1 00:11:24.879 }, 00:11:24.879 { 00:11:24.879 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.879 "dma_device_type": 2 00:11:24.879 }, 00:11:24.879 { 00:11:24.879 "dma_device_id": "system", 00:11:24.879 "dma_device_type": 1 00:11:24.879 }, 00:11:24.879 { 00:11:24.879 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.879 "dma_device_type": 2 00:11:24.879 }, 00:11:24.879 { 00:11:24.879 "dma_device_id": "system", 00:11:24.879 "dma_device_type": 1 00:11:24.879 }, 00:11:24.879 { 00:11:24.879 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.879 "dma_device_type": 2 00:11:24.879 }, 00:11:24.879 { 00:11:24.879 "dma_device_id": "system", 00:11:24.879 "dma_device_type": 1 00:11:24.879 }, 00:11:24.879 { 00:11:24.879 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.879 "dma_device_type": 2 00:11:24.879 } 00:11:24.879 ], 00:11:24.879 "driver_specific": { 00:11:24.879 "raid": { 00:11:24.879 "uuid": "7067d5b5-8d19-4002-9f55-0734bc231e13", 00:11:24.879 "strip_size_kb": 0, 00:11:24.879 "state": "online", 00:11:24.879 "raid_level": "raid1", 00:11:24.879 "superblock": true, 00:11:24.879 "num_base_bdevs": 4, 00:11:24.879 "num_base_bdevs_discovered": 4, 00:11:24.879 "num_base_bdevs_operational": 4, 00:11:24.879 "base_bdevs_list": [ 00:11:24.879 { 00:11:24.879 "name": "pt1", 00:11:24.879 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:24.879 "is_configured": true, 00:11:24.879 "data_offset": 2048, 00:11:24.879 "data_size": 63488 00:11:24.879 }, 00:11:24.879 { 00:11:24.879 "name": "pt2", 00:11:24.879 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:24.879 "is_configured": true, 00:11:24.879 "data_offset": 2048, 00:11:24.879 "data_size": 63488 00:11:24.879 }, 00:11:24.879 { 00:11:24.879 "name": "pt3", 00:11:24.879 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:24.879 "is_configured": true, 00:11:24.879 "data_offset": 2048, 00:11:24.879 "data_size": 63488 00:11:24.879 }, 00:11:24.879 { 00:11:24.879 "name": "pt4", 00:11:24.879 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:11:24.879 "is_configured": true, 00:11:24.879 "data_offset": 2048, 00:11:24.879 "data_size": 63488 00:11:24.879 } 00:11:24.879 ] 00:11:24.879 } 00:11:24.879 } 00:11:24.879 }' 00:11:24.879 04:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:24.879 04:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:24.879 pt2 00:11:24.879 pt3 00:11:24.879 pt4' 00:11:24.879 04:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:24.879 04:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:24.879 04:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:24.879 04:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:24.879 04:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:24.879 04:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.879 04:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.879 04:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.879 04:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:24.879 04:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:24.879 04:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:24.879 04:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:24.879 04:09:24 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:24.880 04:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.880 04:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.880 04:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.140 04:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:25.140 04:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:25.140 04:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:25.140 04:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:25.140 04:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:25.140 04:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.140 04:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.140 04:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.140 04:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:25.140 04:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:25.140 04:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:25.140 04:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:25.140 04:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:25.140 04:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:11:25.140 04:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.140 04:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.140 04:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:25.140 04:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:25.140 04:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:25.140 04:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:25.140 04:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.140 04:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.140 [2024-11-21 04:09:24.951094] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:25.140 04:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.140 04:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 7067d5b5-8d19-4002-9f55-0734bc231e13 '!=' 7067d5b5-8d19-4002-9f55-0734bc231e13 ']' 00:11:25.140 04:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:11:25.140 04:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:25.140 04:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:25.140 04:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:11:25.140 04:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.140 04:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.140 [2024-11-21 04:09:24.990778] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:11:25.140 04:09:24 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.140 04:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:25.140 04:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:25.140 04:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:25.140 04:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:25.140 04:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:25.140 04:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:25.140 04:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.140 04:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.140 04:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.140 04:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.140 04:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.140 04:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:25.140 04:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.140 04:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.140 04:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.140 04:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.140 "name": "raid_bdev1", 00:11:25.140 "uuid": "7067d5b5-8d19-4002-9f55-0734bc231e13", 00:11:25.140 "strip_size_kb": 0, 00:11:25.140 "state": "online", 
00:11:25.140 "raid_level": "raid1", 00:11:25.140 "superblock": true, 00:11:25.140 "num_base_bdevs": 4, 00:11:25.140 "num_base_bdevs_discovered": 3, 00:11:25.140 "num_base_bdevs_operational": 3, 00:11:25.140 "base_bdevs_list": [ 00:11:25.140 { 00:11:25.140 "name": null, 00:11:25.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.140 "is_configured": false, 00:11:25.140 "data_offset": 0, 00:11:25.140 "data_size": 63488 00:11:25.140 }, 00:11:25.140 { 00:11:25.140 "name": "pt2", 00:11:25.140 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:25.140 "is_configured": true, 00:11:25.141 "data_offset": 2048, 00:11:25.141 "data_size": 63488 00:11:25.141 }, 00:11:25.141 { 00:11:25.141 "name": "pt3", 00:11:25.141 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:25.141 "is_configured": true, 00:11:25.141 "data_offset": 2048, 00:11:25.141 "data_size": 63488 00:11:25.141 }, 00:11:25.141 { 00:11:25.141 "name": "pt4", 00:11:25.141 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:25.141 "is_configured": true, 00:11:25.141 "data_offset": 2048, 00:11:25.141 "data_size": 63488 00:11:25.141 } 00:11:25.141 ] 00:11:25.141 }' 00:11:25.141 04:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.141 04:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.712 04:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:25.712 04:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.712 04:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.712 [2024-11-21 04:09:25.406058] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:25.712 [2024-11-21 04:09:25.406155] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:25.712 [2024-11-21 04:09:25.406353] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:11:25.712 [2024-11-21 04:09:25.406480] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:25.712 [2024-11-21 04:09:25.406530] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:11:25.712 04:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.712 04:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.712 04:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:11:25.712 04:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.712 04:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.712 04:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.712 04:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:11:25.712 04:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:11:25.712 04:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:11:25.712 04:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:25.712 04:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:11:25.712 04:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.712 04:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.712 04:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.712 04:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:25.712 04:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:25.712 
04:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:11:25.712 04:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.712 04:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.712 04:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.712 04:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:25.712 04:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:25.712 04:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:11:25.712 04:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.712 04:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.712 04:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.712 04:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:25.712 04:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:25.712 04:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:11:25.712 04:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:25.712 04:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:25.712 04:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.712 04:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.712 [2024-11-21 04:09:25.501834] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:25.712 [2024-11-21 04:09:25.501947] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:25.712 [2024-11-21 04:09:25.501969] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:25.712 [2024-11-21 04:09:25.501981] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:25.712 [2024-11-21 04:09:25.504476] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:25.712 [2024-11-21 04:09:25.504512] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:25.712 [2024-11-21 04:09:25.504586] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:25.712 [2024-11-21 04:09:25.504623] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:25.712 pt2 00:11:25.712 04:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.712 04:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:25.712 04:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:25.712 04:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:25.712 04:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:25.712 04:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:25.712 04:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:25.712 04:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.712 04:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.712 04:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.712 04:09:25 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.713 04:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.713 04:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:25.713 04:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.713 04:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.713 04:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.713 04:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.713 "name": "raid_bdev1", 00:11:25.713 "uuid": "7067d5b5-8d19-4002-9f55-0734bc231e13", 00:11:25.713 "strip_size_kb": 0, 00:11:25.713 "state": "configuring", 00:11:25.713 "raid_level": "raid1", 00:11:25.713 "superblock": true, 00:11:25.713 "num_base_bdevs": 4, 00:11:25.713 "num_base_bdevs_discovered": 1, 00:11:25.713 "num_base_bdevs_operational": 3, 00:11:25.713 "base_bdevs_list": [ 00:11:25.713 { 00:11:25.713 "name": null, 00:11:25.713 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.713 "is_configured": false, 00:11:25.713 "data_offset": 2048, 00:11:25.713 "data_size": 63488 00:11:25.713 }, 00:11:25.713 { 00:11:25.713 "name": "pt2", 00:11:25.713 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:25.713 "is_configured": true, 00:11:25.713 "data_offset": 2048, 00:11:25.713 "data_size": 63488 00:11:25.713 }, 00:11:25.713 { 00:11:25.713 "name": null, 00:11:25.713 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:25.713 "is_configured": false, 00:11:25.713 "data_offset": 2048, 00:11:25.713 "data_size": 63488 00:11:25.713 }, 00:11:25.713 { 00:11:25.713 "name": null, 00:11:25.713 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:25.713 "is_configured": false, 00:11:25.713 "data_offset": 2048, 00:11:25.713 "data_size": 63488 00:11:25.713 } 00:11:25.713 ] 00:11:25.713 }' 
00:11:25.713 04:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.713 04:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.293 04:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:11:26.293 04:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:26.293 04:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:26.293 04:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.293 04:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.293 [2024-11-21 04:09:25.961176] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:26.293 [2024-11-21 04:09:25.961379] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:26.293 [2024-11-21 04:09:25.961431] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:26.293 [2024-11-21 04:09:25.961475] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:26.293 [2024-11-21 04:09:25.961989] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:26.293 [2024-11-21 04:09:25.962052] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:26.293 [2024-11-21 04:09:25.962189] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:26.293 [2024-11-21 04:09:25.962278] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:26.293 pt3 00:11:26.293 04:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.293 04:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:11:26.293 04:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:26.293 04:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:26.293 04:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:26.293 04:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:26.293 04:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:26.293 04:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.293 04:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.293 04:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.293 04:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.293 04:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.293 04:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:26.293 04:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.293 04:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.293 04:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.293 04:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.293 "name": "raid_bdev1", 00:11:26.293 "uuid": "7067d5b5-8d19-4002-9f55-0734bc231e13", 00:11:26.293 "strip_size_kb": 0, 00:11:26.293 "state": "configuring", 00:11:26.293 "raid_level": "raid1", 00:11:26.293 "superblock": true, 00:11:26.293 "num_base_bdevs": 4, 00:11:26.293 "num_base_bdevs_discovered": 2, 00:11:26.293 "num_base_bdevs_operational": 3, 00:11:26.293 
"base_bdevs_list": [ 00:11:26.293 { 00:11:26.293 "name": null, 00:11:26.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.293 "is_configured": false, 00:11:26.293 "data_offset": 2048, 00:11:26.293 "data_size": 63488 00:11:26.293 }, 00:11:26.293 { 00:11:26.294 "name": "pt2", 00:11:26.294 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:26.294 "is_configured": true, 00:11:26.294 "data_offset": 2048, 00:11:26.294 "data_size": 63488 00:11:26.294 }, 00:11:26.294 { 00:11:26.294 "name": "pt3", 00:11:26.294 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:26.294 "is_configured": true, 00:11:26.294 "data_offset": 2048, 00:11:26.294 "data_size": 63488 00:11:26.294 }, 00:11:26.294 { 00:11:26.294 "name": null, 00:11:26.294 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:26.294 "is_configured": false, 00:11:26.294 "data_offset": 2048, 00:11:26.294 "data_size": 63488 00:11:26.294 } 00:11:26.294 ] 00:11:26.294 }' 00:11:26.294 04:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.294 04:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.569 04:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:11:26.569 04:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:26.569 04:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:11:26.569 04:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:26.569 04:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.569 04:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.569 [2024-11-21 04:09:26.412366] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:26.569 [2024-11-21 04:09:26.412444] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:26.569 [2024-11-21 04:09:26.412470] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:26.569 [2024-11-21 04:09:26.412482] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:26.569 [2024-11-21 04:09:26.412977] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:26.569 [2024-11-21 04:09:26.413000] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:26.569 [2024-11-21 04:09:26.413091] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:26.569 [2024-11-21 04:09:26.413120] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:26.569 [2024-11-21 04:09:26.413243] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:11:26.569 [2024-11-21 04:09:26.413256] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:26.569 [2024-11-21 04:09:26.413534] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:11:26.569 [2024-11-21 04:09:26.413747] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:11:26.569 [2024-11-21 04:09:26.413762] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:11:26.569 [2024-11-21 04:09:26.413883] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:26.569 pt4 00:11:26.569 04:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.569 04:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:26.569 04:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:26.569 04:09:26 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:26.569 04:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:26.569 04:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:26.569 04:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:26.569 04:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.569 04:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.569 04:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.569 04:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.569 04:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.569 04:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:26.569 04:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.569 04:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.569 04:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.569 04:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.569 "name": "raid_bdev1", 00:11:26.569 "uuid": "7067d5b5-8d19-4002-9f55-0734bc231e13", 00:11:26.569 "strip_size_kb": 0, 00:11:26.569 "state": "online", 00:11:26.569 "raid_level": "raid1", 00:11:26.569 "superblock": true, 00:11:26.569 "num_base_bdevs": 4, 00:11:26.569 "num_base_bdevs_discovered": 3, 00:11:26.569 "num_base_bdevs_operational": 3, 00:11:26.569 "base_bdevs_list": [ 00:11:26.569 { 00:11:26.569 "name": null, 00:11:26.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.569 "is_configured": false, 00:11:26.569 
"data_offset": 2048, 00:11:26.569 "data_size": 63488 00:11:26.569 }, 00:11:26.569 { 00:11:26.569 "name": "pt2", 00:11:26.569 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:26.569 "is_configured": true, 00:11:26.569 "data_offset": 2048, 00:11:26.569 "data_size": 63488 00:11:26.569 }, 00:11:26.569 { 00:11:26.569 "name": "pt3", 00:11:26.569 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:26.569 "is_configured": true, 00:11:26.569 "data_offset": 2048, 00:11:26.569 "data_size": 63488 00:11:26.569 }, 00:11:26.569 { 00:11:26.569 "name": "pt4", 00:11:26.569 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:26.569 "is_configured": true, 00:11:26.569 "data_offset": 2048, 00:11:26.569 "data_size": 63488 00:11:26.569 } 00:11:26.569 ] 00:11:26.569 }' 00:11:26.569 04:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.569 04:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.139 04:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:27.139 04:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.139 04:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.139 [2024-11-21 04:09:26.835842] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:27.139 [2024-11-21 04:09:26.835955] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:27.139 [2024-11-21 04:09:26.836088] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:27.139 [2024-11-21 04:09:26.836244] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:27.139 [2024-11-21 04:09:26.836296] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:11:27.139 04:09:26 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.139 04:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:11:27.139 04:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.139 04:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.139 04:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.139 04:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.139 04:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:11:27.139 04:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:11:27.139 04:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:11:27.139 04:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:11:27.139 04:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:11:27.139 04:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.139 04:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.139 04:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.139 04:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:27.139 04:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.139 04:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.139 [2024-11-21 04:09:26.887761] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:27.139 [2024-11-21 04:09:26.887887] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:11:27.139 [2024-11-21 04:09:26.887930] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:11:27.139 [2024-11-21 04:09:26.887959] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:27.139 [2024-11-21 04:09:26.890525] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:27.139 [2024-11-21 04:09:26.890602] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:27.139 [2024-11-21 04:09:26.890708] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:27.139 [2024-11-21 04:09:26.890794] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:27.139 [2024-11-21 04:09:26.890979] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:11:27.139 [2024-11-21 04:09:26.891039] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:27.139 [2024-11-21 04:09:26.891103] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state configuring 00:11:27.139 [2024-11-21 04:09:26.891188] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:27.139 [2024-11-21 04:09:26.891361] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:27.139 pt1 00:11:27.139 04:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.139 04:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:11:27.139 04:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:27.139 04:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:27.139 04:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:11:27.139 04:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:27.139 04:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:27.139 04:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:27.139 04:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.139 04:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.139 04:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.139 04:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.139 04:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.139 04:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:27.139 04:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.139 04:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.140 04:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.140 04:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.140 "name": "raid_bdev1", 00:11:27.140 "uuid": "7067d5b5-8d19-4002-9f55-0734bc231e13", 00:11:27.140 "strip_size_kb": 0, 00:11:27.140 "state": "configuring", 00:11:27.140 "raid_level": "raid1", 00:11:27.140 "superblock": true, 00:11:27.140 "num_base_bdevs": 4, 00:11:27.140 "num_base_bdevs_discovered": 2, 00:11:27.140 "num_base_bdevs_operational": 3, 00:11:27.140 "base_bdevs_list": [ 00:11:27.140 { 00:11:27.140 "name": null, 00:11:27.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.140 "is_configured": false, 00:11:27.140 "data_offset": 2048, 00:11:27.140 
"data_size": 63488 00:11:27.140 }, 00:11:27.140 { 00:11:27.140 "name": "pt2", 00:11:27.140 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:27.140 "is_configured": true, 00:11:27.140 "data_offset": 2048, 00:11:27.140 "data_size": 63488 00:11:27.140 }, 00:11:27.140 { 00:11:27.140 "name": "pt3", 00:11:27.140 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:27.140 "is_configured": true, 00:11:27.140 "data_offset": 2048, 00:11:27.140 "data_size": 63488 00:11:27.140 }, 00:11:27.140 { 00:11:27.140 "name": null, 00:11:27.140 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:27.140 "is_configured": false, 00:11:27.140 "data_offset": 2048, 00:11:27.140 "data_size": 63488 00:11:27.140 } 00:11:27.140 ] 00:11:27.140 }' 00:11:27.140 04:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.140 04:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.400 04:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:27.400 04:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:11:27.400 04:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.400 04:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.400 04:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.400 04:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:11:27.400 04:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:27.400 04:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.400 04:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.400 [2024-11-21 
04:09:27.370945] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:27.400 [2024-11-21 04:09:27.371070] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:27.400 [2024-11-21 04:09:27.371148] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:11:27.400 [2024-11-21 04:09:27.371195] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:27.400 [2024-11-21 04:09:27.371783] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:27.660 [2024-11-21 04:09:27.371853] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:27.660 [2024-11-21 04:09:27.371963] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:27.660 [2024-11-21 04:09:27.371996] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:27.660 [2024-11-21 04:09:27.372134] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:11:27.660 [2024-11-21 04:09:27.372151] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:27.660 [2024-11-21 04:09:27.372481] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:11:27.660 [2024-11-21 04:09:27.372627] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:11:27.660 [2024-11-21 04:09:27.372637] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:11:27.660 [2024-11-21 04:09:27.372774] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:27.660 pt4 00:11:27.660 04:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.660 04:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:27.660 04:09:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:27.661 04:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:27.661 04:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:27.661 04:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:27.661 04:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:27.661 04:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.661 04:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.661 04:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.661 04:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.661 04:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:27.661 04:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.661 04:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.661 04:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.661 04:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.661 04:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.661 "name": "raid_bdev1", 00:11:27.661 "uuid": "7067d5b5-8d19-4002-9f55-0734bc231e13", 00:11:27.661 "strip_size_kb": 0, 00:11:27.661 "state": "online", 00:11:27.661 "raid_level": "raid1", 00:11:27.661 "superblock": true, 00:11:27.661 "num_base_bdevs": 4, 00:11:27.661 "num_base_bdevs_discovered": 3, 00:11:27.661 "num_base_bdevs_operational": 3, 00:11:27.661 "base_bdevs_list": [ 00:11:27.661 { 
00:11:27.661 "name": null, 00:11:27.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.661 "is_configured": false, 00:11:27.661 "data_offset": 2048, 00:11:27.661 "data_size": 63488 00:11:27.661 }, 00:11:27.661 { 00:11:27.661 "name": "pt2", 00:11:27.661 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:27.661 "is_configured": true, 00:11:27.661 "data_offset": 2048, 00:11:27.661 "data_size": 63488 00:11:27.661 }, 00:11:27.661 { 00:11:27.661 "name": "pt3", 00:11:27.661 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:27.661 "is_configured": true, 00:11:27.661 "data_offset": 2048, 00:11:27.661 "data_size": 63488 00:11:27.661 }, 00:11:27.661 { 00:11:27.661 "name": "pt4", 00:11:27.661 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:27.661 "is_configured": true, 00:11:27.661 "data_offset": 2048, 00:11:27.661 "data_size": 63488 00:11:27.661 } 00:11:27.661 ] 00:11:27.661 }' 00:11:27.661 04:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.661 04:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.921 04:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:11:27.921 04:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:27.921 04:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.921 04:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.921 04:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.921 04:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:11:27.921 04:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:27.921 04:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:11:27.921 
04:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.921 04:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.921 [2024-11-21 04:09:27.878442] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:28.181 04:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.181 04:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 7067d5b5-8d19-4002-9f55-0734bc231e13 '!=' 7067d5b5-8d19-4002-9f55-0734bc231e13 ']' 00:11:28.181 04:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 85309 00:11:28.181 04:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 85309 ']' 00:11:28.181 04:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 85309 00:11:28.181 04:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:11:28.181 04:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:28.181 04:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85309 00:11:28.181 04:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:28.181 04:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:28.181 04:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85309' 00:11:28.181 killing process with pid 85309 00:11:28.181 04:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 85309 00:11:28.181 04:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 85309 00:11:28.181 [2024-11-21 04:09:27.958430] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:28.181 [2024-11-21 04:09:27.958599] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:28.181 [2024-11-21 04:09:27.958703] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:28.181 [2024-11-21 04:09:27.958713] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:11:28.181 [2024-11-21 04:09:28.040484] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:28.441 04:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:28.441 00:11:28.441 real 0m7.295s 00:11:28.441 user 0m12.064s 00:11:28.441 sys 0m1.678s 00:11:28.441 ************************************ 00:11:28.441 END TEST raid_superblock_test 00:11:28.441 ************************************ 00:11:28.441 04:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:28.441 04:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.701 04:09:28 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:11:28.701 04:09:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:28.701 04:09:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:28.701 04:09:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:28.701 ************************************ 00:11:28.701 START TEST raid_read_error_test 00:11:28.701 ************************************ 00:11:28.701 04:09:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 00:11:28.701 04:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:28.701 04:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:28.701 04:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:28.701 04:09:28 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:28.701 04:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:28.701 04:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:28.701 04:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:28.701 04:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:28.701 04:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:28.701 04:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:28.701 04:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:28.701 04:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:28.701 04:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:28.701 04:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:28.701 04:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:28.701 04:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:28.701 04:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:28.701 04:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:28.701 04:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:28.701 04:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:28.701 04:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:28.701 04:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:28.701 04:09:28 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:28.701 04:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:28.701 04:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:28.701 04:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:28.701 04:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:28.701 04:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.EdBrpyZYG0 00:11:28.701 04:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:28.701 04:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=85787 00:11:28.701 04:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 85787 00:11:28.701 04:09:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 85787 ']' 00:11:28.701 04:09:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:28.701 04:09:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:28.701 04:09:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:28.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:28.701 04:09:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:28.701 04:09:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.701 [2024-11-21 04:09:28.543751] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:11:28.701 [2024-11-21 04:09:28.544076] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85787 ] 00:11:28.961 [2024-11-21 04:09:28.692602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:28.961 [2024-11-21 04:09:28.733927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:28.961 [2024-11-21 04:09:28.810166] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:28.961 [2024-11-21 04:09:28.810208] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:29.530 04:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:29.530 04:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:29.530 04:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:29.530 04:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:29.530 04:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.530 04:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.530 BaseBdev1_malloc 00:11:29.530 04:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.530 04:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:29.530 04:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.530 04:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.530 true 00:11:29.530 04:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:29.530 04:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:29.530 04:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.530 04:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.530 [2024-11-21 04:09:29.449552] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:29.530 [2024-11-21 04:09:29.449629] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:29.530 [2024-11-21 04:09:29.449662] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:11:29.531 [2024-11-21 04:09:29.449679] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:29.531 [2024-11-21 04:09:29.452407] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:29.531 [2024-11-21 04:09:29.452532] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:29.531 BaseBdev1 00:11:29.531 04:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.531 04:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:29.531 04:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:29.531 04:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.531 04:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.531 BaseBdev2_malloc 00:11:29.531 04:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.531 04:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:29.531 04:09:29 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.531 04:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.531 true 00:11:29.531 04:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.531 04:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:29.531 04:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.531 04:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.531 [2024-11-21 04:09:29.496498] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:29.531 [2024-11-21 04:09:29.496552] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:29.531 [2024-11-21 04:09:29.496571] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:11:29.531 [2024-11-21 04:09:29.496589] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:29.531 [2024-11-21 04:09:29.498982] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:29.531 [2024-11-21 04:09:29.499022] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:29.791 BaseBdev2 00:11:29.791 04:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.791 04:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:29.791 04:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:29.791 04:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.791 04:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.791 BaseBdev3_malloc 00:11:29.791 04:09:29 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.791 04:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:29.791 04:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.791 04:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.791 true 00:11:29.791 04:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.791 04:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:29.791 04:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.791 04:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.791 [2024-11-21 04:09:29.543144] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:29.791 [2024-11-21 04:09:29.543201] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:29.791 [2024-11-21 04:09:29.543234] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:11:29.791 [2024-11-21 04:09:29.543244] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:29.791 [2024-11-21 04:09:29.545707] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:29.791 [2024-11-21 04:09:29.545786] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:29.791 BaseBdev3 00:11:29.791 04:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.791 04:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:29.791 04:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:29.791 04:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.791 04:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.791 BaseBdev4_malloc 00:11:29.791 04:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.791 04:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:29.791 04:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.791 04:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.791 true 00:11:29.791 04:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.791 04:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:29.791 04:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.791 04:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.791 [2024-11-21 04:09:29.589951] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:29.791 [2024-11-21 04:09:29.590069] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:29.791 [2024-11-21 04:09:29.590099] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:29.791 [2024-11-21 04:09:29.590108] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:29.791 [2024-11-21 04:09:29.592517] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:29.791 [2024-11-21 04:09:29.592552] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:29.791 BaseBdev4 00:11:29.791 04:09:29 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.791 04:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:29.791 04:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.791 04:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.791 [2024-11-21 04:09:29.597996] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:29.791 [2024-11-21 04:09:29.600154] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:29.791 [2024-11-21 04:09:29.600250] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:29.791 [2024-11-21 04:09:29.600310] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:29.791 [2024-11-21 04:09:29.600518] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002000 00:11:29.791 [2024-11-21 04:09:29.600529] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:29.791 [2024-11-21 04:09:29.600848] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002ef0 00:11:29.791 [2024-11-21 04:09:29.601072] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002000 00:11:29.791 [2024-11-21 04:09:29.601090] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002000 00:11:29.791 [2024-11-21 04:09:29.601207] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:29.791 04:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.791 04:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:29.791 04:09:29 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:29.791 04:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:29.791 04:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:29.791 04:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:29.791 04:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:29.791 04:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.791 04:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.791 04:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.791 04:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.791 04:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:29.791 04:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.791 04:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.791 04:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.791 04:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.791 04:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.791 "name": "raid_bdev1", 00:11:29.791 "uuid": "68093981-89b8-469e-852e-7d4f635afe8f", 00:11:29.791 "strip_size_kb": 0, 00:11:29.791 "state": "online", 00:11:29.791 "raid_level": "raid1", 00:11:29.791 "superblock": true, 00:11:29.791 "num_base_bdevs": 4, 00:11:29.791 "num_base_bdevs_discovered": 4, 00:11:29.791 "num_base_bdevs_operational": 4, 00:11:29.791 "base_bdevs_list": [ 00:11:29.791 { 
00:11:29.791 "name": "BaseBdev1", 00:11:29.791 "uuid": "1a437cc5-3c8c-55a1-bcda-e709fee183cd", 00:11:29.791 "is_configured": true, 00:11:29.791 "data_offset": 2048, 00:11:29.791 "data_size": 63488 00:11:29.791 }, 00:11:29.791 { 00:11:29.791 "name": "BaseBdev2", 00:11:29.791 "uuid": "45460198-43a6-5df6-8be9-0c1af8868c54", 00:11:29.791 "is_configured": true, 00:11:29.791 "data_offset": 2048, 00:11:29.791 "data_size": 63488 00:11:29.791 }, 00:11:29.791 { 00:11:29.791 "name": "BaseBdev3", 00:11:29.791 "uuid": "89fd8be0-7e72-5b61-8c60-099fa2963b38", 00:11:29.791 "is_configured": true, 00:11:29.791 "data_offset": 2048, 00:11:29.791 "data_size": 63488 00:11:29.791 }, 00:11:29.791 { 00:11:29.791 "name": "BaseBdev4", 00:11:29.791 "uuid": "132613b1-c74f-5e88-8637-9040f8f8d253", 00:11:29.791 "is_configured": true, 00:11:29.791 "data_offset": 2048, 00:11:29.791 "data_size": 63488 00:11:29.791 } 00:11:29.791 ] 00:11:29.791 }' 00:11:29.791 04:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.792 04:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.361 04:09:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:30.361 04:09:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:30.361 [2024-11-21 04:09:30.169563] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000003090 00:11:31.301 04:09:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:31.301 04:09:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.301 04:09:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.301 04:09:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.301 04:09:31 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:31.301 04:09:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:31.301 04:09:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:11:31.301 04:09:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:31.301 04:09:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:31.301 04:09:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:31.301 04:09:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:31.301 04:09:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:31.301 04:09:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:31.301 04:09:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:31.301 04:09:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.301 04:09:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.301 04:09:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.301 04:09:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.301 04:09:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.301 04:09:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.301 04:09:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:31.301 04:09:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.301 04:09:31 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.301 04:09:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.301 "name": "raid_bdev1", 00:11:31.301 "uuid": "68093981-89b8-469e-852e-7d4f635afe8f", 00:11:31.301 "strip_size_kb": 0, 00:11:31.301 "state": "online", 00:11:31.301 "raid_level": "raid1", 00:11:31.301 "superblock": true, 00:11:31.301 "num_base_bdevs": 4, 00:11:31.301 "num_base_bdevs_discovered": 4, 00:11:31.301 "num_base_bdevs_operational": 4, 00:11:31.301 "base_bdevs_list": [ 00:11:31.301 { 00:11:31.301 "name": "BaseBdev1", 00:11:31.301 "uuid": "1a437cc5-3c8c-55a1-bcda-e709fee183cd", 00:11:31.301 "is_configured": true, 00:11:31.301 "data_offset": 2048, 00:11:31.301 "data_size": 63488 00:11:31.301 }, 00:11:31.301 { 00:11:31.301 "name": "BaseBdev2", 00:11:31.301 "uuid": "45460198-43a6-5df6-8be9-0c1af8868c54", 00:11:31.302 "is_configured": true, 00:11:31.302 "data_offset": 2048, 00:11:31.302 "data_size": 63488 00:11:31.302 }, 00:11:31.302 { 00:11:31.302 "name": "BaseBdev3", 00:11:31.302 "uuid": "89fd8be0-7e72-5b61-8c60-099fa2963b38", 00:11:31.302 "is_configured": true, 00:11:31.302 "data_offset": 2048, 00:11:31.302 "data_size": 63488 00:11:31.302 }, 00:11:31.302 { 00:11:31.302 "name": "BaseBdev4", 00:11:31.302 "uuid": "132613b1-c74f-5e88-8637-9040f8f8d253", 00:11:31.302 "is_configured": true, 00:11:31.302 "data_offset": 2048, 00:11:31.302 "data_size": 63488 00:11:31.302 } 00:11:31.302 ] 00:11:31.302 }' 00:11:31.302 04:09:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.302 04:09:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.871 04:09:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:31.871 04:09:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.871 04:09:31 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:31.871 [2024-11-21 04:09:31.563732] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:31.871 [2024-11-21 04:09:31.563773] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:31.871 [2024-11-21 04:09:31.566160] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:31.871 [2024-11-21 04:09:31.566249] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:31.871 [2024-11-21 04:09:31.566385] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:31.872 [2024-11-21 04:09:31.566412] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state offline 00:11:31.872 { 00:11:31.872 "results": [ 00:11:31.872 { 00:11:31.872 "job": "raid_bdev1", 00:11:31.872 "core_mask": "0x1", 00:11:31.872 "workload": "randrw", 00:11:31.872 "percentage": 50, 00:11:31.872 "status": "finished", 00:11:31.872 "queue_depth": 1, 00:11:31.872 "io_size": 131072, 00:11:31.872 "runtime": 1.394254, 00:11:31.872 "iops": 8375.088039912383, 00:11:31.872 "mibps": 1046.886004989048, 00:11:31.872 "io_failed": 0, 00:11:31.872 "io_timeout": 0, 00:11:31.872 "avg_latency_us": 116.77437578369451, 00:11:31.872 "min_latency_us": 22.581659388646287, 00:11:31.872 "max_latency_us": 1488.1537117903931 00:11:31.872 } 00:11:31.872 ], 00:11:31.872 "core_count": 1 00:11:31.872 } 00:11:31.872 04:09:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.872 04:09:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 85787 00:11:31.872 04:09:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 85787 ']' 00:11:31.872 04:09:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 85787 00:11:31.872 04:09:31 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # uname 00:11:31.872 04:09:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:31.872 04:09:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85787 00:11:31.872 04:09:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:31.872 04:09:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:31.872 04:09:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85787' 00:11:31.872 killing process with pid 85787 00:11:31.872 04:09:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 85787 00:11:31.872 [2024-11-21 04:09:31.609837] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:31.872 04:09:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 85787 00:11:31.872 [2024-11-21 04:09:31.674814] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:32.132 04:09:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.EdBrpyZYG0 00:11:32.132 04:09:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:32.132 04:09:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:32.132 04:09:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:11:32.132 04:09:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:32.132 04:09:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:32.132 04:09:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:32.132 04:09:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:32.132 00:11:32.132 real 0m3.570s 00:11:32.132 user 0m4.449s 00:11:32.132 sys 0m0.675s 
00:11:32.132 ************************************ 00:11:32.132 END TEST raid_read_error_test 00:11:32.132 ************************************ 00:11:32.132 04:09:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:32.132 04:09:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.132 04:09:32 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:11:32.132 04:09:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:32.132 04:09:32 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:32.132 04:09:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:32.132 ************************************ 00:11:32.132 START TEST raid_write_error_test 00:11:32.132 ************************************ 00:11:32.132 04:09:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 00:11:32.132 04:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:32.132 04:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:32.132 04:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:32.132 04:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:32.132 04:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:32.132 04:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:32.132 04:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:32.132 04:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:32.132 04:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:32.132 04:09:32 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:32.132 04:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:32.132 04:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:32.132 04:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:32.132 04:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:32.132 04:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:32.132 04:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:32.132 04:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:32.132 04:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:32.132 04:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:32.132 04:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:32.132 04:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:32.132 04:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:32.132 04:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:32.132 04:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:32.132 04:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:32.132 04:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:32.132 04:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:32.132 04:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.N86LSowIhJ 00:11:32.392 04:09:32 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=85920 00:11:32.392 04:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:32.392 04:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 85920 00:11:32.392 04:09:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 85920 ']' 00:11:32.392 04:09:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:32.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:32.392 04:09:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:32.392 04:09:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:32.392 04:09:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:32.392 04:09:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.392 [2024-11-21 04:09:32.191859] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:11:32.392 [2024-11-21 04:09:32.192099] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85920 ] 00:11:32.392 [2024-11-21 04:09:32.328651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:32.652 [2024-11-21 04:09:32.371926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:32.652 [2024-11-21 04:09:32.448539] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:32.652 [2024-11-21 04:09:32.448645] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:33.254 04:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:33.254 04:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:33.254 04:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:33.254 04:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:33.254 04:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.254 04:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.254 BaseBdev1_malloc 00:11:33.254 04:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.254 04:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:33.254 04:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.254 04:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.254 true 00:11:33.254 04:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:33.254 04:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:33.254 04:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.254 04:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.254 [2024-11-21 04:09:33.062889] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:33.254 [2024-11-21 04:09:33.062969] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:33.254 [2024-11-21 04:09:33.063000] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:11:33.254 [2024-11-21 04:09:33.063010] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:33.254 [2024-11-21 04:09:33.065632] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:33.254 [2024-11-21 04:09:33.065758] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:33.254 BaseBdev1 00:11:33.254 04:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.254 04:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:33.254 04:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:33.254 04:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.254 04:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.254 BaseBdev2_malloc 00:11:33.254 04:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.254 04:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:33.254 04:09:33 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.254 04:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.254 true 00:11:33.254 04:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.254 04:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:33.254 04:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.254 04:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.254 [2024-11-21 04:09:33.097565] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:33.254 [2024-11-21 04:09:33.097634] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:33.254 [2024-11-21 04:09:33.097653] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:11:33.254 [2024-11-21 04:09:33.097671] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:33.254 [2024-11-21 04:09:33.100151] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:33.255 [2024-11-21 04:09:33.100192] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:33.255 BaseBdev2 00:11:33.255 04:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.255 04:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:33.255 04:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:33.255 04:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.255 04:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:33.255 BaseBdev3_malloc 00:11:33.255 04:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.255 04:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:33.255 04:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.255 04:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.255 true 00:11:33.255 04:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.255 04:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:33.255 04:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.255 04:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.255 [2024-11-21 04:09:33.132207] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:33.255 [2024-11-21 04:09:33.132351] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:33.255 [2024-11-21 04:09:33.132379] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:11:33.255 [2024-11-21 04:09:33.132389] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:33.255 [2024-11-21 04:09:33.134963] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:33.255 [2024-11-21 04:09:33.134999] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:33.255 BaseBdev3 00:11:33.255 04:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.255 04:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:33.255 04:09:33 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:33.255 04:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.255 04:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.255 BaseBdev4_malloc 00:11:33.255 04:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.255 04:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:33.255 04:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.255 04:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.255 true 00:11:33.255 04:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.255 04:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:33.255 04:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.255 04:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.255 [2024-11-21 04:09:33.174909] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:33.255 [2024-11-21 04:09:33.175037] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:33.255 [2024-11-21 04:09:33.175067] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:33.255 [2024-11-21 04:09:33.175076] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:33.255 [2024-11-21 04:09:33.177488] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:33.255 [2024-11-21 04:09:33.177524] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:33.255 BaseBdev4 
00:11:33.255 04:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.255 04:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:33.255 04:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.255 04:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.255 [2024-11-21 04:09:33.186941] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:33.255 [2024-11-21 04:09:33.189143] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:33.255 [2024-11-21 04:09:33.189309] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:33.255 [2024-11-21 04:09:33.189370] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:33.255 [2024-11-21 04:09:33.189601] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002000 00:11:33.255 [2024-11-21 04:09:33.189614] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:33.255 [2024-11-21 04:09:33.189888] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002ef0 00:11:33.255 [2024-11-21 04:09:33.190029] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002000 00:11:33.255 [2024-11-21 04:09:33.190050] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002000 00:11:33.255 [2024-11-21 04:09:33.190194] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:33.255 04:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.255 04:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:11:33.255 04:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:33.255 04:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:33.255 04:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:33.255 04:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:33.255 04:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:33.255 04:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.255 04:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.255 04:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.255 04:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.255 04:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.255 04:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:33.255 04:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.255 04:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.255 04:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.514 04:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.514 "name": "raid_bdev1", 00:11:33.514 "uuid": "a0c1abb3-4a58-489f-a863-3e3b3e8ffc37", 00:11:33.514 "strip_size_kb": 0, 00:11:33.514 "state": "online", 00:11:33.514 "raid_level": "raid1", 00:11:33.514 "superblock": true, 00:11:33.514 "num_base_bdevs": 4, 00:11:33.514 "num_base_bdevs_discovered": 4, 00:11:33.514 
"num_base_bdevs_operational": 4, 00:11:33.514 "base_bdevs_list": [ 00:11:33.514 { 00:11:33.514 "name": "BaseBdev1", 00:11:33.514 "uuid": "ae9eece7-290c-53a3-aba2-da38ea443979", 00:11:33.514 "is_configured": true, 00:11:33.514 "data_offset": 2048, 00:11:33.514 "data_size": 63488 00:11:33.514 }, 00:11:33.514 { 00:11:33.514 "name": "BaseBdev2", 00:11:33.514 "uuid": "e6ec7313-0c6d-528d-b3fa-de521a385eda", 00:11:33.514 "is_configured": true, 00:11:33.515 "data_offset": 2048, 00:11:33.515 "data_size": 63488 00:11:33.515 }, 00:11:33.515 { 00:11:33.515 "name": "BaseBdev3", 00:11:33.515 "uuid": "c23765f0-0c5a-5111-b75f-d30971164fe8", 00:11:33.515 "is_configured": true, 00:11:33.515 "data_offset": 2048, 00:11:33.515 "data_size": 63488 00:11:33.515 }, 00:11:33.515 { 00:11:33.515 "name": "BaseBdev4", 00:11:33.515 "uuid": "8865efb5-a7c9-5b8e-b798-69093443caee", 00:11:33.515 "is_configured": true, 00:11:33.515 "data_offset": 2048, 00:11:33.515 "data_size": 63488 00:11:33.515 } 00:11:33.515 ] 00:11:33.515 }' 00:11:33.515 04:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.515 04:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.774 04:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:33.774 04:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:34.034 [2024-11-21 04:09:33.786473] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000003090 00:11:34.971 04:09:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:34.971 04:09:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.971 04:09:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.971 [2024-11-21 04:09:34.699346] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:11:34.971 [2024-11-21 04:09:34.699491] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:34.971 [2024-11-21 04:09:34.699803] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000003090 00:11:34.971 04:09:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.971 04:09:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:34.971 04:09:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:34.971 04:09:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:11:34.971 04:09:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:11:34.971 04:09:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:34.971 04:09:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:34.971 04:09:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:34.971 04:09:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:34.971 04:09:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:34.971 04:09:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:34.971 04:09:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.971 04:09:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.971 04:09:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.971 04:09:34 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.971 04:09:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.971 04:09:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.971 04:09:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:34.971 04:09:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.971 04:09:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.971 04:09:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.971 "name": "raid_bdev1", 00:11:34.971 "uuid": "a0c1abb3-4a58-489f-a863-3e3b3e8ffc37", 00:11:34.971 "strip_size_kb": 0, 00:11:34.971 "state": "online", 00:11:34.971 "raid_level": "raid1", 00:11:34.971 "superblock": true, 00:11:34.971 "num_base_bdevs": 4, 00:11:34.971 "num_base_bdevs_discovered": 3, 00:11:34.971 "num_base_bdevs_operational": 3, 00:11:34.971 "base_bdevs_list": [ 00:11:34.971 { 00:11:34.971 "name": null, 00:11:34.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.971 "is_configured": false, 00:11:34.971 "data_offset": 0, 00:11:34.971 "data_size": 63488 00:11:34.971 }, 00:11:34.971 { 00:11:34.971 "name": "BaseBdev2", 00:11:34.971 "uuid": "e6ec7313-0c6d-528d-b3fa-de521a385eda", 00:11:34.971 "is_configured": true, 00:11:34.971 "data_offset": 2048, 00:11:34.971 "data_size": 63488 00:11:34.971 }, 00:11:34.971 { 00:11:34.971 "name": "BaseBdev3", 00:11:34.971 "uuid": "c23765f0-0c5a-5111-b75f-d30971164fe8", 00:11:34.971 "is_configured": true, 00:11:34.971 "data_offset": 2048, 00:11:34.971 "data_size": 63488 00:11:34.971 }, 00:11:34.971 { 00:11:34.971 "name": "BaseBdev4", 00:11:34.971 "uuid": "8865efb5-a7c9-5b8e-b798-69093443caee", 00:11:34.971 "is_configured": true, 00:11:34.971 "data_offset": 2048, 00:11:34.971 "data_size": 63488 00:11:34.971 } 00:11:34.971 ] 
00:11:34.971 }' 00:11:34.971 04:09:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.971 04:09:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.231 04:09:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:35.231 04:09:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.231 04:09:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.231 [2024-11-21 04:09:35.174391] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:35.231 [2024-11-21 04:09:35.174439] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:35.231 [2024-11-21 04:09:35.177060] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:35.231 [2024-11-21 04:09:35.177199] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:35.231 [2024-11-21 04:09:35.177336] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:35.231 [2024-11-21 04:09:35.177353] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state offline 00:11:35.231 { 00:11:35.231 "results": [ 00:11:35.231 { 00:11:35.231 "job": "raid_bdev1", 00:11:35.231 "core_mask": "0x1", 00:11:35.231 "workload": "randrw", 00:11:35.231 "percentage": 50, 00:11:35.231 "status": "finished", 00:11:35.231 "queue_depth": 1, 00:11:35.231 "io_size": 131072, 00:11:35.231 "runtime": 1.388458, 00:11:35.231 "iops": 8924.28867131739, 00:11:35.231 "mibps": 1115.5360839146738, 00:11:35.231 "io_failed": 0, 00:11:35.231 "io_timeout": 0, 00:11:35.231 "avg_latency_us": 109.33756780083023, 00:11:35.231 "min_latency_us": 23.475982532751093, 00:11:35.231 "max_latency_us": 1538.235807860262 00:11:35.231 } 00:11:35.231 ], 00:11:35.231 "core_count": 1 
00:11:35.231 } 00:11:35.231 04:09:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.231 04:09:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 85920 00:11:35.231 04:09:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 85920 ']' 00:11:35.231 04:09:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 85920 00:11:35.231 04:09:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:11:35.231 04:09:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:35.231 04:09:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85920 00:11:35.490 04:09:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:35.490 04:09:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:35.490 04:09:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85920' 00:11:35.490 killing process with pid 85920 00:11:35.490 04:09:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 85920 00:11:35.490 [2024-11-21 04:09:35.212127] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:35.490 04:09:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 85920 00:11:35.490 [2024-11-21 04:09:35.281862] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:35.750 04:09:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.N86LSowIhJ 00:11:35.750 04:09:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:35.750 04:09:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:35.750 04:09:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:11:35.750 04:09:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:35.750 04:09:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:35.750 04:09:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:35.750 04:09:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:35.750 00:11:35.750 real 0m3.538s 00:11:35.750 user 0m4.359s 00:11:35.750 sys 0m0.651s 00:11:35.750 04:09:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:35.750 ************************************ 00:11:35.750 END TEST raid_write_error_test 00:11:35.750 ************************************ 00:11:35.750 04:09:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.750 04:09:35 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:11:35.750 04:09:35 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:11:35.750 04:09:35 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:11:35.750 04:09:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:11:35.750 04:09:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:35.750 04:09:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:35.750 ************************************ 00:11:35.750 START TEST raid_rebuild_test 00:11:35.750 ************************************ 00:11:35.750 04:09:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:11:35.750 04:09:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:35.750 04:09:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:11:35.750 04:09:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:11:35.750 
04:09:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:11:35.750 04:09:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:35.750 04:09:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:35.750 04:09:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:35.750 04:09:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:35.750 04:09:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:35.750 04:09:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:35.750 04:09:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:35.750 04:09:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:35.750 04:09:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:35.750 04:09:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:35.750 04:09:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:35.750 04:09:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:35.750 04:09:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:35.750 04:09:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:35.750 04:09:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:35.750 04:09:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:35.750 04:09:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:35.750 04:09:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:35.750 04:09:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:11:35.750 04:09:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=86054 00:11:35.750 04:09:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:35.750 04:09:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 86054 00:11:35.750 04:09:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 86054 ']' 00:11:35.750 04:09:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:35.750 04:09:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:35.750 04:09:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:35.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:35.750 04:09:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:35.750 04:09:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.009 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:36.009 Zero copy mechanism will not be used. 00:11:36.009 [2024-11-21 04:09:35.795689] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:11:36.009 [2024-11-21 04:09:35.795817] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86054 ] 00:11:36.009 [2024-11-21 04:09:35.950059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:36.268 [2024-11-21 04:09:35.993033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.268 [2024-11-21 04:09:36.069795] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:36.268 [2024-11-21 04:09:36.069919] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:36.839 04:09:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:36.839 04:09:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:11:36.839 04:09:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:36.839 04:09:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:36.839 04:09:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.839 04:09:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.839 BaseBdev1_malloc 00:11:36.839 04:09:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.839 04:09:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:36.839 04:09:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.839 04:09:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.839 [2024-11-21 04:09:36.648482] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:36.839 
[2024-11-21 04:09:36.648635] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:36.839 [2024-11-21 04:09:36.648709] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:11:36.839 [2024-11-21 04:09:36.648754] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:36.839 [2024-11-21 04:09:36.651355] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:36.839 [2024-11-21 04:09:36.651420] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:36.839 BaseBdev1 00:11:36.839 04:09:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.839 04:09:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:36.839 04:09:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:36.839 04:09:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.839 04:09:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.839 BaseBdev2_malloc 00:11:36.839 04:09:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.839 04:09:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:36.839 04:09:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.839 04:09:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.839 [2024-11-21 04:09:36.683083] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:36.839 [2024-11-21 04:09:36.683192] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:36.839 [2024-11-21 04:09:36.683242] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007280 00:11:36.839 [2024-11-21 04:09:36.683271] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:36.839 [2024-11-21 04:09:36.685682] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:36.839 [2024-11-21 04:09:36.685760] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:36.839 BaseBdev2 00:11:36.839 04:09:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.839 04:09:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:36.839 04:09:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.839 04:09:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.839 spare_malloc 00:11:36.839 04:09:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.839 04:09:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:36.839 04:09:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.839 04:09:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.839 spare_delay 00:11:36.839 04:09:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.839 04:09:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:36.839 04:09:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.839 04:09:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.839 [2024-11-21 04:09:36.730286] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:36.839 [2024-11-21 04:09:36.730350] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:11:36.839 [2024-11-21 04:09:36.730374] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:36.839 [2024-11-21 04:09:36.730384] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:36.839 [2024-11-21 04:09:36.732938] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:36.839 [2024-11-21 04:09:36.732978] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:36.839 spare 00:11:36.839 04:09:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.839 04:09:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:11:36.839 04:09:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.839 04:09:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.839 [2024-11-21 04:09:36.742307] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:36.839 [2024-11-21 04:09:36.744535] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:36.839 [2024-11-21 04:09:36.744654] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:11:36.839 [2024-11-21 04:09:36.744673] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:36.839 [2024-11-21 04:09:36.744986] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:11:36.839 [2024-11-21 04:09:36.745137] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:11:36.839 [2024-11-21 04:09:36.745153] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:11:36.839 [2024-11-21 04:09:36.745316] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:11:36.839 04:09:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.839 04:09:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:36.839 04:09:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:36.839 04:09:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:36.839 04:09:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:36.839 04:09:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:36.839 04:09:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:36.839 04:09:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.839 04:09:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.839 04:09:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.839 04:09:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.839 04:09:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.839 04:09:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:36.839 04:09:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.839 04:09:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.839 04:09:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.840 04:09:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.840 "name": "raid_bdev1", 00:11:36.840 "uuid": "90f82519-9cd4-415b-b885-ae75c754f228", 00:11:36.840 "strip_size_kb": 0, 00:11:36.840 "state": "online", 00:11:36.840 
"raid_level": "raid1", 00:11:36.840 "superblock": false, 00:11:36.840 "num_base_bdevs": 2, 00:11:36.840 "num_base_bdevs_discovered": 2, 00:11:36.840 "num_base_bdevs_operational": 2, 00:11:36.840 "base_bdevs_list": [ 00:11:36.840 { 00:11:36.840 "name": "BaseBdev1", 00:11:36.840 "uuid": "9b12f35d-ffb6-5ace-b2cd-008de2401071", 00:11:36.840 "is_configured": true, 00:11:36.840 "data_offset": 0, 00:11:36.840 "data_size": 65536 00:11:36.840 }, 00:11:36.840 { 00:11:36.840 "name": "BaseBdev2", 00:11:36.840 "uuid": "43b37124-92b3-5d73-abb5-ca3a9724e03d", 00:11:36.840 "is_configured": true, 00:11:36.840 "data_offset": 0, 00:11:36.840 "data_size": 65536 00:11:36.840 } 00:11:36.840 ] 00:11:36.840 }' 00:11:36.840 04:09:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.840 04:09:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.409 04:09:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:37.409 04:09:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:37.409 04:09:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.409 04:09:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.409 [2024-11-21 04:09:37.221849] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:37.409 04:09:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.409 04:09:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:11:37.409 04:09:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.409 04:09:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.409 04:09:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.409 04:09:37 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:37.409 04:09:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.409 04:09:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:11:37.409 04:09:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:11:37.409 04:09:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:11:37.409 04:09:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:11:37.409 04:09:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:11:37.409 04:09:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:37.409 04:09:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:11:37.409 04:09:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:37.409 04:09:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:37.409 04:09:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:37.409 04:09:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:11:37.409 04:09:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:37.409 04:09:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:37.409 04:09:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:11:37.668 [2024-11-21 04:09:37.517141] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:11:37.668 /dev/nbd0 00:11:37.668 04:09:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:37.668 04:09:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd0 00:11:37.668 04:09:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:37.668 04:09:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:11:37.668 04:09:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:37.668 04:09:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:37.668 04:09:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:37.668 04:09:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:11:37.668 04:09:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:37.668 04:09:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:37.669 04:09:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:37.669 1+0 records in 00:11:37.669 1+0 records out 00:11:37.669 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000482271 s, 8.5 MB/s 00:11:37.669 04:09:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:37.669 04:09:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:11:37.669 04:09:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:37.669 04:09:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:37.669 04:09:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:11:37.669 04:09:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:37.669 04:09:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:37.669 04:09:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 
-- # '[' raid1 = raid5f ']' 00:11:37.669 04:09:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:11:37.669 04:09:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:11:42.949 65536+0 records in 00:11:42.949 65536+0 records out 00:11:42.950 33554432 bytes (34 MB, 32 MiB) copied, 4.46927 s, 7.5 MB/s 00:11:42.950 04:09:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:42.950 04:09:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:42.950 04:09:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:42.950 04:09:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:42.950 04:09:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:11:42.950 04:09:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:42.950 04:09:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:42.950 [2024-11-21 04:09:42.274951] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:42.950 04:09:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:42.950 04:09:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:42.950 04:09:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:42.950 04:09:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:42.950 04:09:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:42.950 04:09:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:42.950 04:09:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # 
break 00:11:42.950 04:09:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:42.950 04:09:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:42.950 04:09:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.950 04:09:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.950 [2024-11-21 04:09:42.311018] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:42.950 04:09:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.950 04:09:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:42.950 04:09:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:42.950 04:09:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:42.950 04:09:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:42.950 04:09:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:42.950 04:09:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:42.950 04:09:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.950 04:09:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.950 04:09:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.950 04:09:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.950 04:09:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:42.950 04:09:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.950 04:09:42 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.950 04:09:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.950 04:09:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.950 04:09:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.950 "name": "raid_bdev1", 00:11:42.950 "uuid": "90f82519-9cd4-415b-b885-ae75c754f228", 00:11:42.950 "strip_size_kb": 0, 00:11:42.950 "state": "online", 00:11:42.950 "raid_level": "raid1", 00:11:42.950 "superblock": false, 00:11:42.950 "num_base_bdevs": 2, 00:11:42.950 "num_base_bdevs_discovered": 1, 00:11:42.950 "num_base_bdevs_operational": 1, 00:11:42.950 "base_bdevs_list": [ 00:11:42.950 { 00:11:42.950 "name": null, 00:11:42.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.950 "is_configured": false, 00:11:42.950 "data_offset": 0, 00:11:42.950 "data_size": 65536 00:11:42.950 }, 00:11:42.950 { 00:11:42.950 "name": "BaseBdev2", 00:11:42.950 "uuid": "43b37124-92b3-5d73-abb5-ca3a9724e03d", 00:11:42.950 "is_configured": true, 00:11:42.950 "data_offset": 0, 00:11:42.950 "data_size": 65536 00:11:42.950 } 00:11:42.950 ] 00:11:42.950 }' 00:11:42.950 04:09:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.950 04:09:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.950 04:09:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:42.950 04:09:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.950 04:09:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.950 [2024-11-21 04:09:42.778312] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:42.950 [2024-11-21 04:09:42.794443] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d06220 00:11:42.950 04:09:42 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.950 04:09:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:42.950 [2024-11-21 04:09:42.797092] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:43.891 04:09:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:43.891 04:09:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:43.891 04:09:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:43.891 04:09:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:43.891 04:09:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:43.891 04:09:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.891 04:09:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:43.891 04:09:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.891 04:09:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.891 04:09:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.891 04:09:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:43.891 "name": "raid_bdev1", 00:11:43.891 "uuid": "90f82519-9cd4-415b-b885-ae75c754f228", 00:11:43.891 "strip_size_kb": 0, 00:11:43.891 "state": "online", 00:11:43.891 "raid_level": "raid1", 00:11:43.891 "superblock": false, 00:11:43.891 "num_base_bdevs": 2, 00:11:43.891 "num_base_bdevs_discovered": 2, 00:11:43.891 "num_base_bdevs_operational": 2, 00:11:43.891 "process": { 00:11:43.891 "type": "rebuild", 00:11:43.891 "target": "spare", 00:11:43.891 "progress": { 00:11:43.891 "blocks": 20480, 
00:11:43.891 "percent": 31 00:11:43.891 } 00:11:43.891 }, 00:11:43.891 "base_bdevs_list": [ 00:11:43.891 { 00:11:43.891 "name": "spare", 00:11:43.891 "uuid": "37152a90-083b-5bc7-92a1-b80669003130", 00:11:43.891 "is_configured": true, 00:11:43.891 "data_offset": 0, 00:11:43.891 "data_size": 65536 00:11:43.891 }, 00:11:43.891 { 00:11:43.891 "name": "BaseBdev2", 00:11:43.891 "uuid": "43b37124-92b3-5d73-abb5-ca3a9724e03d", 00:11:43.891 "is_configured": true, 00:11:43.891 "data_offset": 0, 00:11:43.891 "data_size": 65536 00:11:43.891 } 00:11:43.891 ] 00:11:43.891 }' 00:11:43.891 04:09:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:44.152 04:09:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:44.152 04:09:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:44.152 04:09:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:44.152 04:09:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:44.152 04:09:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.152 04:09:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.152 [2024-11-21 04:09:43.962072] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:44.152 [2024-11-21 04:09:44.006949] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:44.152 [2024-11-21 04:09:44.007023] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:44.152 [2024-11-21 04:09:44.007046] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:44.152 [2024-11-21 04:09:44.007054] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:44.152 04:09:44 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.152 04:09:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:44.152 04:09:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:44.152 04:09:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:44.152 04:09:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:44.152 04:09:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:44.152 04:09:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:44.152 04:09:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:44.152 04:09:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:44.152 04:09:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:44.152 04:09:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:44.152 04:09:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.152 04:09:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:44.152 04:09:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.152 04:09:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.152 04:09:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.152 04:09:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:44.152 "name": "raid_bdev1", 00:11:44.152 "uuid": "90f82519-9cd4-415b-b885-ae75c754f228", 00:11:44.152 "strip_size_kb": 0, 00:11:44.152 "state": "online", 00:11:44.152 "raid_level": "raid1", 00:11:44.152 
"superblock": false, 00:11:44.152 "num_base_bdevs": 2, 00:11:44.152 "num_base_bdevs_discovered": 1, 00:11:44.152 "num_base_bdevs_operational": 1, 00:11:44.152 "base_bdevs_list": [ 00:11:44.152 { 00:11:44.152 "name": null, 00:11:44.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.152 "is_configured": false, 00:11:44.152 "data_offset": 0, 00:11:44.152 "data_size": 65536 00:11:44.152 }, 00:11:44.152 { 00:11:44.152 "name": "BaseBdev2", 00:11:44.152 "uuid": "43b37124-92b3-5d73-abb5-ca3a9724e03d", 00:11:44.152 "is_configured": true, 00:11:44.152 "data_offset": 0, 00:11:44.152 "data_size": 65536 00:11:44.152 } 00:11:44.152 ] 00:11:44.152 }' 00:11:44.152 04:09:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:44.152 04:09:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.725 04:09:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:44.725 04:09:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:44.725 04:09:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:44.725 04:09:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:44.725 04:09:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:44.725 04:09:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.725 04:09:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.725 04:09:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.725 04:09:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:44.725 04:09:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.725 04:09:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:11:44.725 "name": "raid_bdev1", 00:11:44.725 "uuid": "90f82519-9cd4-415b-b885-ae75c754f228", 00:11:44.725 "strip_size_kb": 0, 00:11:44.725 "state": "online", 00:11:44.725 "raid_level": "raid1", 00:11:44.725 "superblock": false, 00:11:44.725 "num_base_bdevs": 2, 00:11:44.725 "num_base_bdevs_discovered": 1, 00:11:44.725 "num_base_bdevs_operational": 1, 00:11:44.725 "base_bdevs_list": [ 00:11:44.725 { 00:11:44.725 "name": null, 00:11:44.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.725 "is_configured": false, 00:11:44.725 "data_offset": 0, 00:11:44.725 "data_size": 65536 00:11:44.725 }, 00:11:44.725 { 00:11:44.725 "name": "BaseBdev2", 00:11:44.725 "uuid": "43b37124-92b3-5d73-abb5-ca3a9724e03d", 00:11:44.725 "is_configured": true, 00:11:44.725 "data_offset": 0, 00:11:44.725 "data_size": 65536 00:11:44.725 } 00:11:44.725 ] 00:11:44.725 }' 00:11:44.725 04:09:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:44.725 04:09:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:44.725 04:09:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:44.725 04:09:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:44.725 04:09:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:44.725 04:09:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.725 04:09:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.725 [2024-11-21 04:09:44.595028] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:44.725 [2024-11-21 04:09:44.604278] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d062f0 00:11:44.725 04:09:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.725 
04:09:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:11:44.725 [2024-11-21 04:09:44.606456] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:45.719 04:09:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:45.719 04:09:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:45.719 04:09:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:45.719 04:09:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:45.719 04:09:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:45.719 04:09:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.719 04:09:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:45.719 04:09:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.719 04:09:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.719 04:09:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.719 04:09:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:45.719 "name": "raid_bdev1", 00:11:45.719 "uuid": "90f82519-9cd4-415b-b885-ae75c754f228", 00:11:45.719 "strip_size_kb": 0, 00:11:45.719 "state": "online", 00:11:45.719 "raid_level": "raid1", 00:11:45.719 "superblock": false, 00:11:45.719 "num_base_bdevs": 2, 00:11:45.719 "num_base_bdevs_discovered": 2, 00:11:45.719 "num_base_bdevs_operational": 2, 00:11:45.719 "process": { 00:11:45.719 "type": "rebuild", 00:11:45.719 "target": "spare", 00:11:45.719 "progress": { 00:11:45.719 "blocks": 20480, 00:11:45.719 "percent": 31 00:11:45.719 } 00:11:45.719 }, 00:11:45.719 "base_bdevs_list": [ 
00:11:45.719 { 00:11:45.719 "name": "spare", 00:11:45.719 "uuid": "37152a90-083b-5bc7-92a1-b80669003130", 00:11:45.719 "is_configured": true, 00:11:45.719 "data_offset": 0, 00:11:45.719 "data_size": 65536 00:11:45.719 }, 00:11:45.719 { 00:11:45.719 "name": "BaseBdev2", 00:11:45.719 "uuid": "43b37124-92b3-5d73-abb5-ca3a9724e03d", 00:11:45.719 "is_configured": true, 00:11:45.719 "data_offset": 0, 00:11:45.719 "data_size": 65536 00:11:45.719 } 00:11:45.719 ] 00:11:45.719 }' 00:11:45.719 04:09:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:45.979 04:09:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:45.979 04:09:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:45.979 04:09:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:45.979 04:09:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:11:45.979 04:09:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:11:45.979 04:09:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:11:45.979 04:09:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:11:45.979 04:09:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=301 00:11:45.979 04:09:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:45.979 04:09:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:45.979 04:09:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:45.979 04:09:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:45.979 04:09:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:45.979 
04:09:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:45.979 04:09:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.979 04:09:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.979 04:09:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.979 04:09:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:45.979 04:09:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.979 04:09:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:45.979 "name": "raid_bdev1", 00:11:45.979 "uuid": "90f82519-9cd4-415b-b885-ae75c754f228", 00:11:45.979 "strip_size_kb": 0, 00:11:45.979 "state": "online", 00:11:45.979 "raid_level": "raid1", 00:11:45.979 "superblock": false, 00:11:45.979 "num_base_bdevs": 2, 00:11:45.979 "num_base_bdevs_discovered": 2, 00:11:45.979 "num_base_bdevs_operational": 2, 00:11:45.979 "process": { 00:11:45.979 "type": "rebuild", 00:11:45.979 "target": "spare", 00:11:45.979 "progress": { 00:11:45.979 "blocks": 22528, 00:11:45.979 "percent": 34 00:11:45.979 } 00:11:45.979 }, 00:11:45.979 "base_bdevs_list": [ 00:11:45.979 { 00:11:45.979 "name": "spare", 00:11:45.979 "uuid": "37152a90-083b-5bc7-92a1-b80669003130", 00:11:45.979 "is_configured": true, 00:11:45.979 "data_offset": 0, 00:11:45.979 "data_size": 65536 00:11:45.979 }, 00:11:45.979 { 00:11:45.979 "name": "BaseBdev2", 00:11:45.979 "uuid": "43b37124-92b3-5d73-abb5-ca3a9724e03d", 00:11:45.979 "is_configured": true, 00:11:45.979 "data_offset": 0, 00:11:45.979 "data_size": 65536 00:11:45.979 } 00:11:45.979 ] 00:11:45.979 }' 00:11:45.979 04:09:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:45.979 04:09:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:11:45.979 04:09:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:45.979 04:09:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:45.979 04:09:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:46.966 04:09:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:46.966 04:09:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:46.966 04:09:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:46.966 04:09:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:46.966 04:09:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:46.966 04:09:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:46.966 04:09:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.966 04:09:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:46.966 04:09:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.966 04:09:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.225 04:09:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.225 04:09:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:47.225 "name": "raid_bdev1", 00:11:47.225 "uuid": "90f82519-9cd4-415b-b885-ae75c754f228", 00:11:47.225 "strip_size_kb": 0, 00:11:47.225 "state": "online", 00:11:47.225 "raid_level": "raid1", 00:11:47.225 "superblock": false, 00:11:47.225 "num_base_bdevs": 2, 00:11:47.225 "num_base_bdevs_discovered": 2, 00:11:47.225 "num_base_bdevs_operational": 2, 00:11:47.225 "process": { 
00:11:47.225 "type": "rebuild", 00:11:47.225 "target": "spare", 00:11:47.225 "progress": { 00:11:47.225 "blocks": 47104, 00:11:47.225 "percent": 71 00:11:47.225 } 00:11:47.225 }, 00:11:47.225 "base_bdevs_list": [ 00:11:47.225 { 00:11:47.225 "name": "spare", 00:11:47.225 "uuid": "37152a90-083b-5bc7-92a1-b80669003130", 00:11:47.225 "is_configured": true, 00:11:47.225 "data_offset": 0, 00:11:47.225 "data_size": 65536 00:11:47.225 }, 00:11:47.225 { 00:11:47.225 "name": "BaseBdev2", 00:11:47.225 "uuid": "43b37124-92b3-5d73-abb5-ca3a9724e03d", 00:11:47.225 "is_configured": true, 00:11:47.225 "data_offset": 0, 00:11:47.225 "data_size": 65536 00:11:47.225 } 00:11:47.225 ] 00:11:47.225 }' 00:11:47.225 04:09:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:47.225 04:09:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:47.225 04:09:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:47.225 04:09:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:47.225 04:09:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:48.163 [2024-11-21 04:09:47.830541] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:11:48.163 [2024-11-21 04:09:47.830650] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:11:48.163 [2024-11-21 04:09:47.830717] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:48.163 04:09:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:48.163 04:09:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:48.163 04:09:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:48.163 04:09:48 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:48.163 04:09:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:48.163 04:09:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:48.163 04:09:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.163 04:09:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:48.163 04:09:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.163 04:09:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.163 04:09:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.163 04:09:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:48.163 "name": "raid_bdev1", 00:11:48.163 "uuid": "90f82519-9cd4-415b-b885-ae75c754f228", 00:11:48.163 "strip_size_kb": 0, 00:11:48.163 "state": "online", 00:11:48.163 "raid_level": "raid1", 00:11:48.163 "superblock": false, 00:11:48.163 "num_base_bdevs": 2, 00:11:48.163 "num_base_bdevs_discovered": 2, 00:11:48.163 "num_base_bdevs_operational": 2, 00:11:48.163 "base_bdevs_list": [ 00:11:48.163 { 00:11:48.163 "name": "spare", 00:11:48.163 "uuid": "37152a90-083b-5bc7-92a1-b80669003130", 00:11:48.163 "is_configured": true, 00:11:48.163 "data_offset": 0, 00:11:48.163 "data_size": 65536 00:11:48.163 }, 00:11:48.163 { 00:11:48.163 "name": "BaseBdev2", 00:11:48.163 "uuid": "43b37124-92b3-5d73-abb5-ca3a9724e03d", 00:11:48.163 "is_configured": true, 00:11:48.163 "data_offset": 0, 00:11:48.163 "data_size": 65536 00:11:48.163 } 00:11:48.163 ] 00:11:48.163 }' 00:11:48.163 04:09:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:48.423 04:09:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:11:48.423 04:09:48 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:48.423 04:09:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:11:48.423 04:09:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:11:48.423 04:09:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:48.423 04:09:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:48.423 04:09:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:48.423 04:09:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:48.423 04:09:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:48.423 04:09:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.423 04:09:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.423 04:09:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.423 04:09:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:48.423 04:09:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.423 04:09:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:48.423 "name": "raid_bdev1", 00:11:48.423 "uuid": "90f82519-9cd4-415b-b885-ae75c754f228", 00:11:48.423 "strip_size_kb": 0, 00:11:48.423 "state": "online", 00:11:48.423 "raid_level": "raid1", 00:11:48.423 "superblock": false, 00:11:48.423 "num_base_bdevs": 2, 00:11:48.423 "num_base_bdevs_discovered": 2, 00:11:48.423 "num_base_bdevs_operational": 2, 00:11:48.423 "base_bdevs_list": [ 00:11:48.423 { 00:11:48.423 "name": "spare", 00:11:48.423 "uuid": "37152a90-083b-5bc7-92a1-b80669003130", 00:11:48.423 "is_configured": true, 
00:11:48.423 "data_offset": 0, 00:11:48.423 "data_size": 65536 00:11:48.423 }, 00:11:48.423 { 00:11:48.423 "name": "BaseBdev2", 00:11:48.423 "uuid": "43b37124-92b3-5d73-abb5-ca3a9724e03d", 00:11:48.423 "is_configured": true, 00:11:48.423 "data_offset": 0, 00:11:48.423 "data_size": 65536 00:11:48.423 } 00:11:48.423 ] 00:11:48.423 }' 00:11:48.423 04:09:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:48.423 04:09:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:48.423 04:09:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:48.423 04:09:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:48.423 04:09:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:48.423 04:09:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:48.423 04:09:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:48.423 04:09:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:48.423 04:09:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:48.423 04:09:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:48.423 04:09:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.423 04:09:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.423 04:09:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.423 04:09:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.423 04:09:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:48.423 04:09:48 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.423 04:09:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.423 04:09:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.423 04:09:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.423 04:09:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.423 "name": "raid_bdev1", 00:11:48.423 "uuid": "90f82519-9cd4-415b-b885-ae75c754f228", 00:11:48.423 "strip_size_kb": 0, 00:11:48.423 "state": "online", 00:11:48.423 "raid_level": "raid1", 00:11:48.423 "superblock": false, 00:11:48.423 "num_base_bdevs": 2, 00:11:48.423 "num_base_bdevs_discovered": 2, 00:11:48.423 "num_base_bdevs_operational": 2, 00:11:48.423 "base_bdevs_list": [ 00:11:48.423 { 00:11:48.423 "name": "spare", 00:11:48.423 "uuid": "37152a90-083b-5bc7-92a1-b80669003130", 00:11:48.423 "is_configured": true, 00:11:48.423 "data_offset": 0, 00:11:48.423 "data_size": 65536 00:11:48.423 }, 00:11:48.423 { 00:11:48.423 "name": "BaseBdev2", 00:11:48.423 "uuid": "43b37124-92b3-5d73-abb5-ca3a9724e03d", 00:11:48.423 "is_configured": true, 00:11:48.423 "data_offset": 0, 00:11:48.423 "data_size": 65536 00:11:48.423 } 00:11:48.423 ] 00:11:48.423 }' 00:11:48.423 04:09:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.423 04:09:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.993 04:09:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:48.993 04:09:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.993 04:09:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.993 [2024-11-21 04:09:48.777855] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:48.993 [2024-11-21 
04:09:48.777990] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:48.993 [2024-11-21 04:09:48.778125] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:48.993 [2024-11-21 04:09:48.778279] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:48.993 [2024-11-21 04:09:48.778357] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:11:48.993 04:09:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.993 04:09:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.993 04:09:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:11:48.993 04:09:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.993 04:09:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.993 04:09:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.993 04:09:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:11:48.993 04:09:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:11:48.993 04:09:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:11:48.993 04:09:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:11:48.993 04:09:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:48.993 04:09:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:11:48.993 04:09:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:48.993 04:09:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:48.993 04:09:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:48.993 04:09:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:11:48.993 04:09:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:48.993 04:09:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:48.993 04:09:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:11:49.253 /dev/nbd0 00:11:49.253 04:09:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:49.253 04:09:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:49.253 04:09:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:49.253 04:09:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:11:49.253 04:09:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:49.253 04:09:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:49.253 04:09:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:49.253 04:09:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:11:49.253 04:09:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:49.253 04:09:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:49.253 04:09:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:49.253 1+0 records in 00:11:49.253 1+0 records out 00:11:49.253 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000361856 s, 11.3 MB/s 00:11:49.253 04:09:49 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:49.253 04:09:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:11:49.253 04:09:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:49.253 04:09:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:49.253 04:09:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:11:49.253 04:09:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:49.253 04:09:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:49.253 04:09:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:11:49.513 /dev/nbd1 00:11:49.513 04:09:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:49.513 04:09:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:49.513 04:09:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:11:49.513 04:09:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:11:49.513 04:09:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:49.513 04:09:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:49.513 04:09:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:11:49.513 04:09:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:11:49.513 04:09:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:49.513 04:09:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:49.513 04:09:49 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:49.513 1+0 records in 00:11:49.513 1+0 records out 00:11:49.513 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000317146 s, 12.9 MB/s 00:11:49.513 04:09:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:49.513 04:09:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:11:49.513 04:09:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:49.513 04:09:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:49.513 04:09:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:11:49.513 04:09:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:49.513 04:09:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:49.513 04:09:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:11:49.513 04:09:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:11:49.513 04:09:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:49.513 04:09:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:49.513 04:09:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:49.513 04:09:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:11:49.513 04:09:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:49.513 04:09:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:49.773 04:09:49 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:49.773 04:09:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:49.773 04:09:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:49.773 04:09:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:49.773 04:09:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:49.773 04:09:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:50.033 04:09:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:50.033 04:09:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:50.033 04:09:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:50.033 04:09:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:50.033 04:09:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:50.033 04:09:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:50.033 04:09:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:50.033 04:09:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:50.033 04:09:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:50.033 04:09:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:50.033 04:09:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:50.033 04:09:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:50.033 04:09:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:11:50.033 04:09:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 
86054 00:11:50.033 04:09:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 86054 ']' 00:11:50.033 04:09:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 86054 00:11:50.033 04:09:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:11:50.033 04:09:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:50.033 04:09:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86054 00:11:50.293 04:09:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:50.293 04:09:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:50.293 04:09:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86054' 00:11:50.293 killing process with pid 86054 00:11:50.293 04:09:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 86054 00:11:50.293 Received shutdown signal, test time was about 60.000000 seconds 00:11:50.293 00:11:50.293 Latency(us) 00:11:50.293 [2024-11-21T04:09:50.266Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:50.293 [2024-11-21T04:09:50.266Z] =================================================================================================================== 00:11:50.293 [2024-11-21T04:09:50.266Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:50.293 04:09:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 86054 00:11:50.293 [2024-11-21 04:09:50.021072] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:50.293 [2024-11-21 04:09:50.081782] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:50.553 04:09:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:11:50.553 ************************************ 00:11:50.553 END TEST 
raid_rebuild_test 00:11:50.553 ************************************ 00:11:50.553 00:11:50.553 real 0m14.719s 00:11:50.553 user 0m16.567s 00:11:50.553 sys 0m3.270s 00:11:50.553 04:09:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:50.553 04:09:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.553 04:09:50 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:11:50.553 04:09:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:11:50.553 04:09:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:50.553 04:09:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:50.553 ************************************ 00:11:50.553 START TEST raid_rebuild_test_sb 00:11:50.553 ************************************ 00:11:50.553 04:09:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:11:50.553 04:09:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:50.553 04:09:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:11:50.553 04:09:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:11:50.553 04:09:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:11:50.553 04:09:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:50.553 04:09:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:50.553 04:09:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:50.553 04:09:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:50.553 04:09:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:50.553 04:09:50 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:50.553 04:09:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:50.553 04:09:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:50.553 04:09:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:50.553 04:09:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:50.553 04:09:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:50.553 04:09:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:50.553 04:09:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:50.553 04:09:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:50.553 04:09:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:50.553 04:09:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:50.553 04:09:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:50.553 04:09:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:50.553 04:09:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:11:50.553 04:09:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:11:50.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:50.553 04:09:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=86468 00:11:50.553 04:09:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 86468 00:11:50.553 04:09:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 86468 ']' 00:11:50.553 04:09:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:50.553 04:09:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:50.553 04:09:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:50.553 04:09:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:50.553 04:09:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.553 04:09:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:50.812 [2024-11-21 04:09:50.580535] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:11:50.812 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:50.812 Zero copy mechanism will not be used. 
00:11:50.812 [2024-11-21 04:09:50.580730] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86468 ] 00:11:50.812 [2024-11-21 04:09:50.735785] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:50.812 [2024-11-21 04:09:50.779081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.071 [2024-11-21 04:09:50.857545] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:51.071 [2024-11-21 04:09:50.857589] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:51.640 04:09:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:51.640 04:09:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:51.640 04:09:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:51.641 04:09:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:51.641 04:09:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.641 04:09:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.641 BaseBdev1_malloc 00:11:51.641 04:09:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.641 04:09:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:51.641 04:09:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.641 04:09:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.641 [2024-11-21 04:09:51.472540] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:11:51.641 [2024-11-21 04:09:51.472707] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:51.641 [2024-11-21 04:09:51.472746] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:11:51.641 [2024-11-21 04:09:51.472761] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:51.641 [2024-11-21 04:09:51.475581] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:51.641 [2024-11-21 04:09:51.475624] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:51.641 BaseBdev1 00:11:51.641 04:09:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.641 04:09:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:51.641 04:09:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:51.641 04:09:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.641 04:09:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.641 BaseBdev2_malloc 00:11:51.641 04:09:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.641 04:09:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:51.641 04:09:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.641 04:09:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.641 [2024-11-21 04:09:51.508133] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:51.641 [2024-11-21 04:09:51.508199] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:51.641 [2024-11-21 04:09:51.508240] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:51.641 [2024-11-21 04:09:51.508252] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:51.641 [2024-11-21 04:09:51.510791] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:51.641 [2024-11-21 04:09:51.510889] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:51.641 BaseBdev2 00:11:51.641 04:09:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.641 04:09:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:51.641 04:09:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.641 04:09:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.641 spare_malloc 00:11:51.641 04:09:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.641 04:09:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:51.641 04:09:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.641 04:09:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.641 spare_delay 00:11:51.641 04:09:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.641 04:09:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:51.641 04:09:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.641 04:09:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.641 [2024-11-21 04:09:51.555368] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 
00:11:51.641 [2024-11-21 04:09:51.555435] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:51.641 [2024-11-21 04:09:51.555462] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:51.641 [2024-11-21 04:09:51.555473] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:51.641 [2024-11-21 04:09:51.558176] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:51.641 [2024-11-21 04:09:51.558301] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:51.641 spare 00:11:51.641 04:09:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.641 04:09:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:11:51.641 04:09:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.641 04:09:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.641 [2024-11-21 04:09:51.567417] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:51.641 [2024-11-21 04:09:51.569778] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:51.641 [2024-11-21 04:09:51.570022] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:11:51.641 [2024-11-21 04:09:51.570040] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:51.641 [2024-11-21 04:09:51.570375] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:11:51.641 [2024-11-21 04:09:51.570550] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:11:51.641 [2024-11-21 04:09:51.570573] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000001200 00:11:51.641 [2024-11-21 04:09:51.570746] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:51.641 04:09:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.641 04:09:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:51.641 04:09:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:51.641 04:09:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:51.641 04:09:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:51.641 04:09:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:51.641 04:09:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:51.641 04:09:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:51.641 04:09:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:51.641 04:09:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:51.641 04:09:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.641 04:09:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.641 04:09:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:51.641 04:09:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.641 04:09:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.641 04:09:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.901 04:09:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:11:51.901 "name": "raid_bdev1", 00:11:51.901 "uuid": "575ba3eb-2805-43e4-bd38-15923f46632d", 00:11:51.901 "strip_size_kb": 0, 00:11:51.901 "state": "online", 00:11:51.901 "raid_level": "raid1", 00:11:51.901 "superblock": true, 00:11:51.901 "num_base_bdevs": 2, 00:11:51.901 "num_base_bdevs_discovered": 2, 00:11:51.901 "num_base_bdevs_operational": 2, 00:11:51.901 "base_bdevs_list": [ 00:11:51.901 { 00:11:51.901 "name": "BaseBdev1", 00:11:51.901 "uuid": "05152f3c-4813-5816-8d60-8ab896d205ff", 00:11:51.901 "is_configured": true, 00:11:51.901 "data_offset": 2048, 00:11:51.901 "data_size": 63488 00:11:51.901 }, 00:11:51.901 { 00:11:51.901 "name": "BaseBdev2", 00:11:51.901 "uuid": "3d18b69f-20de-5d07-9fb2-5d05f1377d0e", 00:11:51.901 "is_configured": true, 00:11:51.901 "data_offset": 2048, 00:11:51.901 "data_size": 63488 00:11:51.901 } 00:11:51.901 ] 00:11:51.901 }' 00:11:51.901 04:09:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:51.901 04:09:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.160 04:09:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:52.160 04:09:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.160 04:09:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.160 04:09:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:52.160 [2024-11-21 04:09:52.031005] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:52.160 04:09:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.160 04:09:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:11:52.160 04:09:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.160 04:09:52 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:52.160 04:09:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.160 04:09:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.160 04:09:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.160 04:09:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:11:52.160 04:09:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:11:52.160 04:09:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:11:52.160 04:09:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:11:52.160 04:09:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:11:52.160 04:09:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:52.160 04:09:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:11:52.160 04:09:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:52.160 04:09:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:52.160 04:09:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:52.160 04:09:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:11:52.160 04:09:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:52.160 04:09:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:52.160 04:09:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:11:52.420 [2024-11-21 04:09:52.322260] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:11:52.420 /dev/nbd0 00:11:52.420 04:09:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:52.420 04:09:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:52.420 04:09:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:52.420 04:09:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:11:52.420 04:09:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:52.420 04:09:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:52.420 04:09:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:52.420 04:09:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:11:52.420 04:09:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:52.420 04:09:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:52.420 04:09:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:52.420 1+0 records in 00:11:52.420 1+0 records out 00:11:52.420 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000263896 s, 15.5 MB/s 00:11:52.420 04:09:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:52.420 04:09:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:11:52.420 04:09:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:52.420 04:09:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:52.420 04:09:52 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:11:52.420 04:09:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:52.420 04:09:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:52.420 04:09:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:11:52.420 04:09:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:11:52.420 04:09:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:11:56.615 63488+0 records in 00:11:56.615 63488+0 records out 00:11:56.615 32505856 bytes (33 MB, 31 MiB) copied, 4.18712 s, 7.8 MB/s 00:11:56.615 04:09:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:56.615 04:09:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:56.615 04:09:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:56.615 04:09:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:56.615 04:09:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:11:56.615 04:09:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:56.615 04:09:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:56.883 04:09:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:56.883 [2024-11-21 04:09:56.795475] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:56.883 04:09:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:56.883 04:09:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 
00:11:56.883 04:09:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:56.883 04:09:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:56.883 04:09:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:56.883 04:09:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:11:56.883 04:09:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:11:56.883 04:09:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:56.883 04:09:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.883 04:09:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.883 [2024-11-21 04:09:56.811568] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:56.883 04:09:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.883 04:09:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:56.883 04:09:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:56.883 04:09:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:56.883 04:09:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:56.883 04:09:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:56.883 04:09:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:56.883 04:09:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:56.883 04:09:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:56.883 04:09:56 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:56.883 04:09:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:56.883 04:09:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.883 04:09:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:56.883 04:09:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.883 04:09:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.883 04:09:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.150 04:09:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.150 "name": "raid_bdev1", 00:11:57.150 "uuid": "575ba3eb-2805-43e4-bd38-15923f46632d", 00:11:57.150 "strip_size_kb": 0, 00:11:57.150 "state": "online", 00:11:57.150 "raid_level": "raid1", 00:11:57.150 "superblock": true, 00:11:57.150 "num_base_bdevs": 2, 00:11:57.150 "num_base_bdevs_discovered": 1, 00:11:57.150 "num_base_bdevs_operational": 1, 00:11:57.150 "base_bdevs_list": [ 00:11:57.150 { 00:11:57.150 "name": null, 00:11:57.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.150 "is_configured": false, 00:11:57.150 "data_offset": 0, 00:11:57.150 "data_size": 63488 00:11:57.150 }, 00:11:57.150 { 00:11:57.150 "name": "BaseBdev2", 00:11:57.150 "uuid": "3d18b69f-20de-5d07-9fb2-5d05f1377d0e", 00:11:57.150 "is_configured": true, 00:11:57.150 "data_offset": 2048, 00:11:57.150 "data_size": 63488 00:11:57.150 } 00:11:57.150 ] 00:11:57.150 }' 00:11:57.150 04:09:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.150 04:09:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.408 04:09:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 
00:11:57.408 04:09:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.408 04:09:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.408 [2024-11-21 04:09:57.290861] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:57.408 [2024-11-21 04:09:57.315644] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000c3e280 00:11:57.408 04:09:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.408 04:09:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:57.408 [2024-11-21 04:09:57.319147] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:58.790 04:09:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:58.790 04:09:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:58.790 04:09:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:58.790 04:09:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:58.790 04:09:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:58.790 04:09:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.790 04:09:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.790 04:09:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:58.790 04:09:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.790 04:09:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.790 04:09:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:11:58.790 "name": "raid_bdev1", 00:11:58.790 "uuid": "575ba3eb-2805-43e4-bd38-15923f46632d", 00:11:58.790 "strip_size_kb": 0, 00:11:58.790 "state": "online", 00:11:58.790 "raid_level": "raid1", 00:11:58.790 "superblock": true, 00:11:58.790 "num_base_bdevs": 2, 00:11:58.790 "num_base_bdevs_discovered": 2, 00:11:58.790 "num_base_bdevs_operational": 2, 00:11:58.790 "process": { 00:11:58.790 "type": "rebuild", 00:11:58.790 "target": "spare", 00:11:58.790 "progress": { 00:11:58.790 "blocks": 20480, 00:11:58.790 "percent": 32 00:11:58.790 } 00:11:58.790 }, 00:11:58.790 "base_bdevs_list": [ 00:11:58.790 { 00:11:58.790 "name": "spare", 00:11:58.790 "uuid": "d8b673a5-b755-53b0-b268-a013a999b358", 00:11:58.790 "is_configured": true, 00:11:58.790 "data_offset": 2048, 00:11:58.790 "data_size": 63488 00:11:58.790 }, 00:11:58.790 { 00:11:58.790 "name": "BaseBdev2", 00:11:58.790 "uuid": "3d18b69f-20de-5d07-9fb2-5d05f1377d0e", 00:11:58.790 "is_configured": true, 00:11:58.790 "data_offset": 2048, 00:11:58.790 "data_size": 63488 00:11:58.790 } 00:11:58.790 ] 00:11:58.790 }' 00:11:58.790 04:09:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:58.790 04:09:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:58.790 04:09:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:58.790 04:09:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:58.790 04:09:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:58.790 04:09:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.790 04:09:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.790 [2024-11-21 04:09:58.458592] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:58.790 [2024-11-21 
04:09:58.528212] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:58.790 [2024-11-21 04:09:58.528282] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:58.790 [2024-11-21 04:09:58.528303] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:58.790 [2024-11-21 04:09:58.528311] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:58.790 04:09:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.790 04:09:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:58.790 04:09:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:58.790 04:09:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:58.790 04:09:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:58.790 04:09:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:58.790 04:09:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:58.790 04:09:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.790 04:09:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.790 04:09:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.790 04:09:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.790 04:09:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.790 04:09:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.790 04:09:58 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:58.790 04:09:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.790 04:09:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.790 04:09:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.790 "name": "raid_bdev1", 00:11:58.790 "uuid": "575ba3eb-2805-43e4-bd38-15923f46632d", 00:11:58.790 "strip_size_kb": 0, 00:11:58.790 "state": "online", 00:11:58.790 "raid_level": "raid1", 00:11:58.790 "superblock": true, 00:11:58.790 "num_base_bdevs": 2, 00:11:58.790 "num_base_bdevs_discovered": 1, 00:11:58.790 "num_base_bdevs_operational": 1, 00:11:58.790 "base_bdevs_list": [ 00:11:58.790 { 00:11:58.790 "name": null, 00:11:58.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.790 "is_configured": false, 00:11:58.790 "data_offset": 0, 00:11:58.790 "data_size": 63488 00:11:58.790 }, 00:11:58.790 { 00:11:58.790 "name": "BaseBdev2", 00:11:58.790 "uuid": "3d18b69f-20de-5d07-9fb2-5d05f1377d0e", 00:11:58.790 "is_configured": true, 00:11:58.790 "data_offset": 2048, 00:11:58.790 "data_size": 63488 00:11:58.790 } 00:11:58.790 ] 00:11:58.790 }' 00:11:58.790 04:09:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.790 04:09:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.050 04:09:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:59.050 04:09:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:59.050 04:09:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:59.050 04:09:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:59.050 04:09:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:11:59.050 04:09:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:59.050 04:09:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.050 04:09:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.050 04:09:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.050 04:09:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.050 04:09:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:59.050 "name": "raid_bdev1", 00:11:59.050 "uuid": "575ba3eb-2805-43e4-bd38-15923f46632d", 00:11:59.050 "strip_size_kb": 0, 00:11:59.050 "state": "online", 00:11:59.050 "raid_level": "raid1", 00:11:59.050 "superblock": true, 00:11:59.050 "num_base_bdevs": 2, 00:11:59.050 "num_base_bdevs_discovered": 1, 00:11:59.050 "num_base_bdevs_operational": 1, 00:11:59.050 "base_bdevs_list": [ 00:11:59.050 { 00:11:59.050 "name": null, 00:11:59.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:59.050 "is_configured": false, 00:11:59.050 "data_offset": 0, 00:11:59.050 "data_size": 63488 00:11:59.050 }, 00:11:59.050 { 00:11:59.050 "name": "BaseBdev2", 00:11:59.050 "uuid": "3d18b69f-20de-5d07-9fb2-5d05f1377d0e", 00:11:59.050 "is_configured": true, 00:11:59.050 "data_offset": 2048, 00:11:59.050 "data_size": 63488 00:11:59.050 } 00:11:59.050 ] 00:11:59.050 }' 00:11:59.050 04:09:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:59.310 04:09:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:59.310 04:09:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:59.310 04:09:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:59.310 04:09:59 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:59.310 04:09:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.310 04:09:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.310 [2024-11-21 04:09:59.068275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:59.310 [2024-11-21 04:09:59.077010] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000c3e350 00:11:59.310 04:09:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.310 04:09:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:11:59.310 [2024-11-21 04:09:59.079254] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:00.250 04:10:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:00.250 04:10:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:00.250 04:10:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:00.250 04:10:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:00.250 04:10:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:00.250 04:10:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.250 04:10:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:00.250 04:10:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.250 04:10:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.250 04:10:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:00.250 04:10:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:00.250 "name": "raid_bdev1", 00:12:00.250 "uuid": "575ba3eb-2805-43e4-bd38-15923f46632d", 00:12:00.250 "strip_size_kb": 0, 00:12:00.250 "state": "online", 00:12:00.250 "raid_level": "raid1", 00:12:00.250 "superblock": true, 00:12:00.250 "num_base_bdevs": 2, 00:12:00.250 "num_base_bdevs_discovered": 2, 00:12:00.250 "num_base_bdevs_operational": 2, 00:12:00.250 "process": { 00:12:00.250 "type": "rebuild", 00:12:00.250 "target": "spare", 00:12:00.250 "progress": { 00:12:00.250 "blocks": 20480, 00:12:00.250 "percent": 32 00:12:00.250 } 00:12:00.250 }, 00:12:00.250 "base_bdevs_list": [ 00:12:00.250 { 00:12:00.250 "name": "spare", 00:12:00.250 "uuid": "d8b673a5-b755-53b0-b268-a013a999b358", 00:12:00.250 "is_configured": true, 00:12:00.250 "data_offset": 2048, 00:12:00.250 "data_size": 63488 00:12:00.250 }, 00:12:00.250 { 00:12:00.250 "name": "BaseBdev2", 00:12:00.250 "uuid": "3d18b69f-20de-5d07-9fb2-5d05f1377d0e", 00:12:00.250 "is_configured": true, 00:12:00.250 "data_offset": 2048, 00:12:00.250 "data_size": 63488 00:12:00.250 } 00:12:00.250 ] 00:12:00.250 }' 00:12:00.250 04:10:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:00.250 04:10:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:00.250 04:10:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:00.510 04:10:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:00.510 04:10:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:12:00.510 04:10:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:12:00.510 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:12:00.510 04:10:00 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:00.510 04:10:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:00.510 04:10:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:00.510 04:10:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=316 00:12:00.510 04:10:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:00.510 04:10:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:00.510 04:10:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:00.510 04:10:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:00.510 04:10:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:00.510 04:10:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:00.510 04:10:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.510 04:10:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.510 04:10:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:00.510 04:10:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.510 04:10:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.510 04:10:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:00.510 "name": "raid_bdev1", 00:12:00.510 "uuid": "575ba3eb-2805-43e4-bd38-15923f46632d", 00:12:00.510 "strip_size_kb": 0, 00:12:00.510 "state": "online", 00:12:00.510 "raid_level": "raid1", 00:12:00.510 "superblock": true, 00:12:00.510 "num_base_bdevs": 2, 00:12:00.510 
"num_base_bdevs_discovered": 2, 00:12:00.510 "num_base_bdevs_operational": 2, 00:12:00.510 "process": { 00:12:00.510 "type": "rebuild", 00:12:00.510 "target": "spare", 00:12:00.510 "progress": { 00:12:00.510 "blocks": 22528, 00:12:00.510 "percent": 35 00:12:00.510 } 00:12:00.510 }, 00:12:00.510 "base_bdevs_list": [ 00:12:00.510 { 00:12:00.510 "name": "spare", 00:12:00.510 "uuid": "d8b673a5-b755-53b0-b268-a013a999b358", 00:12:00.510 "is_configured": true, 00:12:00.510 "data_offset": 2048, 00:12:00.510 "data_size": 63488 00:12:00.510 }, 00:12:00.510 { 00:12:00.510 "name": "BaseBdev2", 00:12:00.510 "uuid": "3d18b69f-20de-5d07-9fb2-5d05f1377d0e", 00:12:00.510 "is_configured": true, 00:12:00.510 "data_offset": 2048, 00:12:00.510 "data_size": 63488 00:12:00.510 } 00:12:00.510 ] 00:12:00.510 }' 00:12:00.511 04:10:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:00.511 04:10:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:00.511 04:10:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:00.511 04:10:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:00.511 04:10:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:01.451 04:10:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:01.451 04:10:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:01.451 04:10:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:01.451 04:10:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:01.451 04:10:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:01.451 04:10:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:12:01.451 04:10:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.451 04:10:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:01.451 04:10:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.451 04:10:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.451 04:10:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.451 04:10:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:01.451 "name": "raid_bdev1", 00:12:01.451 "uuid": "575ba3eb-2805-43e4-bd38-15923f46632d", 00:12:01.451 "strip_size_kb": 0, 00:12:01.451 "state": "online", 00:12:01.451 "raid_level": "raid1", 00:12:01.451 "superblock": true, 00:12:01.451 "num_base_bdevs": 2, 00:12:01.451 "num_base_bdevs_discovered": 2, 00:12:01.451 "num_base_bdevs_operational": 2, 00:12:01.451 "process": { 00:12:01.451 "type": "rebuild", 00:12:01.451 "target": "spare", 00:12:01.451 "progress": { 00:12:01.451 "blocks": 45056, 00:12:01.451 "percent": 70 00:12:01.451 } 00:12:01.451 }, 00:12:01.451 "base_bdevs_list": [ 00:12:01.451 { 00:12:01.451 "name": "spare", 00:12:01.451 "uuid": "d8b673a5-b755-53b0-b268-a013a999b358", 00:12:01.451 "is_configured": true, 00:12:01.451 "data_offset": 2048, 00:12:01.451 "data_size": 63488 00:12:01.451 }, 00:12:01.451 { 00:12:01.451 "name": "BaseBdev2", 00:12:01.451 "uuid": "3d18b69f-20de-5d07-9fb2-5d05f1377d0e", 00:12:01.451 "is_configured": true, 00:12:01.451 "data_offset": 2048, 00:12:01.451 "data_size": 63488 00:12:01.451 } 00:12:01.451 ] 00:12:01.451 }' 00:12:01.451 04:10:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:01.712 04:10:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:01.712 04:10:01 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:01.712 04:10:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:01.712 04:10:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:02.282 [2024-11-21 04:10:02.200977] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:02.282 [2024-11-21 04:10:02.201164] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:02.282 [2024-11-21 04:10:02.201371] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:02.851 04:10:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:02.851 04:10:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:02.851 04:10:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:02.851 04:10:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:02.851 04:10:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:02.851 04:10:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:02.851 04:10:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.851 04:10:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:02.851 04:10:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.851 04:10:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.851 04:10:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.851 04:10:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:12:02.851 "name": "raid_bdev1", 00:12:02.851 "uuid": "575ba3eb-2805-43e4-bd38-15923f46632d", 00:12:02.851 "strip_size_kb": 0, 00:12:02.851 "state": "online", 00:12:02.851 "raid_level": "raid1", 00:12:02.851 "superblock": true, 00:12:02.851 "num_base_bdevs": 2, 00:12:02.851 "num_base_bdevs_discovered": 2, 00:12:02.851 "num_base_bdevs_operational": 2, 00:12:02.851 "base_bdevs_list": [ 00:12:02.851 { 00:12:02.851 "name": "spare", 00:12:02.851 "uuid": "d8b673a5-b755-53b0-b268-a013a999b358", 00:12:02.851 "is_configured": true, 00:12:02.851 "data_offset": 2048, 00:12:02.851 "data_size": 63488 00:12:02.851 }, 00:12:02.851 { 00:12:02.851 "name": "BaseBdev2", 00:12:02.851 "uuid": "3d18b69f-20de-5d07-9fb2-5d05f1377d0e", 00:12:02.851 "is_configured": true, 00:12:02.851 "data_offset": 2048, 00:12:02.851 "data_size": 63488 00:12:02.851 } 00:12:02.851 ] 00:12:02.851 }' 00:12:02.851 04:10:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:02.851 04:10:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:02.851 04:10:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:02.851 04:10:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:02.851 04:10:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:12:02.851 04:10:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:02.851 04:10:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:02.851 04:10:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:02.851 04:10:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:02.851 04:10:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:02.851 04:10:02 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.851 04:10:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.851 04:10:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.851 04:10:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:02.851 04:10:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.851 04:10:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:02.851 "name": "raid_bdev1", 00:12:02.851 "uuid": "575ba3eb-2805-43e4-bd38-15923f46632d", 00:12:02.851 "strip_size_kb": 0, 00:12:02.851 "state": "online", 00:12:02.851 "raid_level": "raid1", 00:12:02.851 "superblock": true, 00:12:02.852 "num_base_bdevs": 2, 00:12:02.852 "num_base_bdevs_discovered": 2, 00:12:02.852 "num_base_bdevs_operational": 2, 00:12:02.852 "base_bdevs_list": [ 00:12:02.852 { 00:12:02.852 "name": "spare", 00:12:02.852 "uuid": "d8b673a5-b755-53b0-b268-a013a999b358", 00:12:02.852 "is_configured": true, 00:12:02.852 "data_offset": 2048, 00:12:02.852 "data_size": 63488 00:12:02.852 }, 00:12:02.852 { 00:12:02.852 "name": "BaseBdev2", 00:12:02.852 "uuid": "3d18b69f-20de-5d07-9fb2-5d05f1377d0e", 00:12:02.852 "is_configured": true, 00:12:02.852 "data_offset": 2048, 00:12:02.852 "data_size": 63488 00:12:02.852 } 00:12:02.852 ] 00:12:02.852 }' 00:12:02.852 04:10:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:02.852 04:10:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:02.852 04:10:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:02.852 04:10:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:02.852 04:10:02 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:02.852 04:10:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:02.852 04:10:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:02.852 04:10:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:02.852 04:10:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:02.852 04:10:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:02.852 04:10:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.852 04:10:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.852 04:10:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:02.852 04:10:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.852 04:10:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.852 04:10:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.852 04:10:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:02.852 04:10:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.852 04:10:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.111 04:10:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.111 "name": "raid_bdev1", 00:12:03.111 "uuid": "575ba3eb-2805-43e4-bd38-15923f46632d", 00:12:03.111 "strip_size_kb": 0, 00:12:03.111 "state": "online", 00:12:03.111 "raid_level": "raid1", 00:12:03.111 "superblock": true, 00:12:03.111 "num_base_bdevs": 2, 00:12:03.111 
"num_base_bdevs_discovered": 2, 00:12:03.111 "num_base_bdevs_operational": 2, 00:12:03.111 "base_bdevs_list": [ 00:12:03.111 { 00:12:03.111 "name": "spare", 00:12:03.111 "uuid": "d8b673a5-b755-53b0-b268-a013a999b358", 00:12:03.111 "is_configured": true, 00:12:03.111 "data_offset": 2048, 00:12:03.111 "data_size": 63488 00:12:03.111 }, 00:12:03.111 { 00:12:03.111 "name": "BaseBdev2", 00:12:03.111 "uuid": "3d18b69f-20de-5d07-9fb2-5d05f1377d0e", 00:12:03.111 "is_configured": true, 00:12:03.111 "data_offset": 2048, 00:12:03.111 "data_size": 63488 00:12:03.111 } 00:12:03.111 ] 00:12:03.111 }' 00:12:03.111 04:10:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:03.111 04:10:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.372 04:10:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:03.372 04:10:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.372 04:10:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.372 [2024-11-21 04:10:03.228248] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:03.372 [2024-11-21 04:10:03.228342] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:03.372 [2024-11-21 04:10:03.228502] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:03.372 [2024-11-21 04:10:03.228587] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:03.372 [2024-11-21 04:10:03.228600] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:12:03.372 04:10:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.372 04:10:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:12:03.372 04:10:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:12:03.372 04:10:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.372 04:10:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.372 04:10:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.372 04:10:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:03.372 04:10:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:03.372 04:10:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:03.372 04:10:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:03.372 04:10:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:03.372 04:10:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:03.372 04:10:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:03.372 04:10:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:03.372 04:10:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:03.372 04:10:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:03.372 04:10:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:03.372 04:10:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:03.372 04:10:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:03.632 /dev/nbd0 00:12:03.632 04:10:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename 
/dev/nbd0 00:12:03.632 04:10:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:03.632 04:10:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:03.632 04:10:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:12:03.632 04:10:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:03.632 04:10:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:03.632 04:10:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:03.632 04:10:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:12:03.632 04:10:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:03.632 04:10:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:03.632 04:10:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:03.632 1+0 records in 00:12:03.632 1+0 records out 00:12:03.632 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00051058 s, 8.0 MB/s 00:12:03.632 04:10:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:03.632 04:10:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:03.632 04:10:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:03.632 04:10:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:03.632 04:10:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:03.632 04:10:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:03.632 04:10:03 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:03.632 04:10:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:03.892 /dev/nbd1 00:12:03.892 04:10:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:03.892 04:10:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:03.892 04:10:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:03.892 04:10:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:12:03.892 04:10:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:03.892 04:10:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:03.892 04:10:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:03.893 04:10:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:12:03.893 04:10:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:03.893 04:10:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:03.893 04:10:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:03.893 1+0 records in 00:12:03.893 1+0 records out 00:12:03.893 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000230749 s, 17.8 MB/s 00:12:03.893 04:10:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:03.893 04:10:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:03.893 04:10:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:03.893 04:10:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:03.893 04:10:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:03.893 04:10:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:03.893 04:10:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:03.893 04:10:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:04.152 04:10:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:04.152 04:10:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:04.152 04:10:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:04.152 04:10:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:04.152 04:10:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:04.152 04:10:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:04.152 04:10:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:04.413 04:10:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:04.413 04:10:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:04.413 04:10:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:04.413 04:10:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:04.413 04:10:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:04.413 04:10:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd0 /proc/partitions 00:12:04.413 04:10:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:04.413 04:10:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:04.413 04:10:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:04.413 04:10:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:04.413 04:10:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:04.413 04:10:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:04.413 04:10:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:04.413 04:10:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:04.413 04:10:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:04.413 04:10:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:04.413 04:10:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:04.413 04:10:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:04.413 04:10:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:12:04.413 04:10:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:12:04.413 04:10:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.413 04:10:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.413 04:10:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.413 04:10:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:04.413 04:10:04 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.413 04:10:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.413 [2024-11-21 04:10:04.361453] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:04.413 [2024-11-21 04:10:04.361577] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:04.413 [2024-11-21 04:10:04.361617] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:04.413 [2024-11-21 04:10:04.361666] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:04.413 [2024-11-21 04:10:04.364270] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:04.413 [2024-11-21 04:10:04.364346] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:04.413 [2024-11-21 04:10:04.364465] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:04.413 [2024-11-21 04:10:04.364547] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:04.413 [2024-11-21 04:10:04.364743] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:04.413 spare 00:12:04.413 04:10:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.413 04:10:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:12:04.413 04:10:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.413 04:10:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.674 [2024-11-21 04:10:04.464702] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:12:04.674 [2024-11-21 04:10:04.464727] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:04.674 [2024-11-21 
04:10:04.465002] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cae960 00:12:04.674 [2024-11-21 04:10:04.465172] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:12:04.674 [2024-11-21 04:10:04.465185] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:12:04.674 [2024-11-21 04:10:04.465345] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:04.674 04:10:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.674 04:10:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:04.674 04:10:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:04.674 04:10:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:04.674 04:10:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:04.674 04:10:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:04.674 04:10:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:04.674 04:10:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.674 04:10:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.674 04:10:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.674 04:10:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.674 04:10:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.674 04:10:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.674 04:10:04 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:04.674 04:10:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.674 04:10:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.674 04:10:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.674 "name": "raid_bdev1", 00:12:04.674 "uuid": "575ba3eb-2805-43e4-bd38-15923f46632d", 00:12:04.674 "strip_size_kb": 0, 00:12:04.674 "state": "online", 00:12:04.674 "raid_level": "raid1", 00:12:04.674 "superblock": true, 00:12:04.674 "num_base_bdevs": 2, 00:12:04.674 "num_base_bdevs_discovered": 2, 00:12:04.674 "num_base_bdevs_operational": 2, 00:12:04.674 "base_bdevs_list": [ 00:12:04.674 { 00:12:04.674 "name": "spare", 00:12:04.674 "uuid": "d8b673a5-b755-53b0-b268-a013a999b358", 00:12:04.674 "is_configured": true, 00:12:04.674 "data_offset": 2048, 00:12:04.674 "data_size": 63488 00:12:04.674 }, 00:12:04.674 { 00:12:04.674 "name": "BaseBdev2", 00:12:04.674 "uuid": "3d18b69f-20de-5d07-9fb2-5d05f1377d0e", 00:12:04.674 "is_configured": true, 00:12:04.674 "data_offset": 2048, 00:12:04.674 "data_size": 63488 00:12:04.674 } 00:12:04.674 ] 00:12:04.674 }' 00:12:04.674 04:10:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.674 04:10:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.244 04:10:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:05.244 04:10:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:05.244 04:10:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:05.244 04:10:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:05.244 04:10:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:12:05.244 04:10:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:05.244 04:10:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.244 04:10:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.244 04:10:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.244 04:10:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.244 04:10:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:05.244 "name": "raid_bdev1", 00:12:05.244 "uuid": "575ba3eb-2805-43e4-bd38-15923f46632d", 00:12:05.244 "strip_size_kb": 0, 00:12:05.244 "state": "online", 00:12:05.244 "raid_level": "raid1", 00:12:05.244 "superblock": true, 00:12:05.244 "num_base_bdevs": 2, 00:12:05.244 "num_base_bdevs_discovered": 2, 00:12:05.244 "num_base_bdevs_operational": 2, 00:12:05.244 "base_bdevs_list": [ 00:12:05.244 { 00:12:05.244 "name": "spare", 00:12:05.244 "uuid": "d8b673a5-b755-53b0-b268-a013a999b358", 00:12:05.244 "is_configured": true, 00:12:05.244 "data_offset": 2048, 00:12:05.244 "data_size": 63488 00:12:05.244 }, 00:12:05.244 { 00:12:05.244 "name": "BaseBdev2", 00:12:05.244 "uuid": "3d18b69f-20de-5d07-9fb2-5d05f1377d0e", 00:12:05.244 "is_configured": true, 00:12:05.245 "data_offset": 2048, 00:12:05.245 "data_size": 63488 00:12:05.245 } 00:12:05.245 ] 00:12:05.245 }' 00:12:05.245 04:10:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:05.245 04:10:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:05.245 04:10:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:05.245 04:10:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:05.245 04:10:05 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.245 04:10:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.245 04:10:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.245 04:10:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:12:05.245 04:10:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.245 04:10:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:12:05.245 04:10:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:05.245 04:10:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.245 04:10:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.245 [2024-11-21 04:10:05.088273] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:05.245 04:10:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.245 04:10:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:05.245 04:10:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:05.245 04:10:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:05.245 04:10:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:05.245 04:10:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:05.245 04:10:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:05.245 04:10:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.245 04:10:05 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.245 04:10:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:05.245 04:10:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.245 04:10:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.245 04:10:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:05.245 04:10:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.245 04:10:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.245 04:10:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.245 04:10:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:05.245 "name": "raid_bdev1", 00:12:05.245 "uuid": "575ba3eb-2805-43e4-bd38-15923f46632d", 00:12:05.245 "strip_size_kb": 0, 00:12:05.245 "state": "online", 00:12:05.245 "raid_level": "raid1", 00:12:05.245 "superblock": true, 00:12:05.245 "num_base_bdevs": 2, 00:12:05.245 "num_base_bdevs_discovered": 1, 00:12:05.245 "num_base_bdevs_operational": 1, 00:12:05.245 "base_bdevs_list": [ 00:12:05.245 { 00:12:05.245 "name": null, 00:12:05.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.245 "is_configured": false, 00:12:05.245 "data_offset": 0, 00:12:05.245 "data_size": 63488 00:12:05.245 }, 00:12:05.245 { 00:12:05.245 "name": "BaseBdev2", 00:12:05.245 "uuid": "3d18b69f-20de-5d07-9fb2-5d05f1377d0e", 00:12:05.245 "is_configured": true, 00:12:05.245 "data_offset": 2048, 00:12:05.245 "data_size": 63488 00:12:05.245 } 00:12:05.245 ] 00:12:05.245 }' 00:12:05.245 04:10:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:05.245 04:10:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:12:05.814 04:10:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:05.814 04:10:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.814 04:10:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.814 [2024-11-21 04:10:05.512005] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:05.814 [2024-11-21 04:10:05.512347] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:05.814 [2024-11-21 04:10:05.512434] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:12:05.814 [2024-11-21 04:10:05.512518] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:05.814 [2024-11-21 04:10:05.521375] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caea30 00:12:05.814 04:10:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.814 04:10:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:12:05.814 [2024-11-21 04:10:05.523705] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:06.754 04:10:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:06.754 04:10:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:06.754 04:10:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:06.754 04:10:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:06.754 04:10:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:06.754 04:10:06 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.754 04:10:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.754 04:10:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.754 04:10:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:06.754 04:10:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.754 04:10:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:06.754 "name": "raid_bdev1", 00:12:06.754 "uuid": "575ba3eb-2805-43e4-bd38-15923f46632d", 00:12:06.754 "strip_size_kb": 0, 00:12:06.754 "state": "online", 00:12:06.754 "raid_level": "raid1", 00:12:06.754 "superblock": true, 00:12:06.754 "num_base_bdevs": 2, 00:12:06.754 "num_base_bdevs_discovered": 2, 00:12:06.754 "num_base_bdevs_operational": 2, 00:12:06.754 "process": { 00:12:06.754 "type": "rebuild", 00:12:06.754 "target": "spare", 00:12:06.754 "progress": { 00:12:06.754 "blocks": 20480, 00:12:06.754 "percent": 32 00:12:06.754 } 00:12:06.754 }, 00:12:06.754 "base_bdevs_list": [ 00:12:06.754 { 00:12:06.754 "name": "spare", 00:12:06.754 "uuid": "d8b673a5-b755-53b0-b268-a013a999b358", 00:12:06.754 "is_configured": true, 00:12:06.754 "data_offset": 2048, 00:12:06.754 "data_size": 63488 00:12:06.754 }, 00:12:06.754 { 00:12:06.754 "name": "BaseBdev2", 00:12:06.754 "uuid": "3d18b69f-20de-5d07-9fb2-5d05f1377d0e", 00:12:06.754 "is_configured": true, 00:12:06.754 "data_offset": 2048, 00:12:06.754 "data_size": 63488 00:12:06.754 } 00:12:06.754 ] 00:12:06.754 }' 00:12:06.754 04:10:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:06.754 04:10:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:06.754 04:10:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:12:06.755 04:10:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:06.755 04:10:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:12:06.755 04:10:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.755 04:10:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.755 [2024-11-21 04:10:06.664242] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:07.015 [2024-11-21 04:10:06.731780] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:07.015 [2024-11-21 04:10:06.731928] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:07.015 [2024-11-21 04:10:06.731950] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:07.015 [2024-11-21 04:10:06.731959] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:07.015 04:10:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.015 04:10:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:07.015 04:10:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:07.015 04:10:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:07.015 04:10:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:07.015 04:10:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:07.015 04:10:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:07.015 04:10:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.015 
04:10:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.015 04:10:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.015 04:10:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.015 04:10:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.015 04:10:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:07.015 04:10:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.015 04:10:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.015 04:10:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.015 04:10:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.015 "name": "raid_bdev1", 00:12:07.015 "uuid": "575ba3eb-2805-43e4-bd38-15923f46632d", 00:12:07.015 "strip_size_kb": 0, 00:12:07.015 "state": "online", 00:12:07.015 "raid_level": "raid1", 00:12:07.015 "superblock": true, 00:12:07.015 "num_base_bdevs": 2, 00:12:07.015 "num_base_bdevs_discovered": 1, 00:12:07.015 "num_base_bdevs_operational": 1, 00:12:07.015 "base_bdevs_list": [ 00:12:07.015 { 00:12:07.015 "name": null, 00:12:07.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.015 "is_configured": false, 00:12:07.015 "data_offset": 0, 00:12:07.015 "data_size": 63488 00:12:07.015 }, 00:12:07.015 { 00:12:07.015 "name": "BaseBdev2", 00:12:07.015 "uuid": "3d18b69f-20de-5d07-9fb2-5d05f1377d0e", 00:12:07.015 "is_configured": true, 00:12:07.015 "data_offset": 2048, 00:12:07.015 "data_size": 63488 00:12:07.015 } 00:12:07.015 ] 00:12:07.015 }' 00:12:07.015 04:10:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.015 04:10:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:12:07.275 04:10:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:07.275 04:10:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.275 04:10:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.275 [2024-11-21 04:10:07.199476] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:07.275 [2024-11-21 04:10:07.199543] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:07.275 [2024-11-21 04:10:07.199573] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:12:07.275 [2024-11-21 04:10:07.199583] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:07.275 [2024-11-21 04:10:07.200098] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:07.275 [2024-11-21 04:10:07.200129] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:07.275 [2024-11-21 04:10:07.200275] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:07.275 [2024-11-21 04:10:07.200290] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:07.275 [2024-11-21 04:10:07.200313] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:07.275 [2024-11-21 04:10:07.200349] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:07.275 [2024-11-21 04:10:07.208119] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caeb00 00:12:07.275 spare 00:12:07.275 04:10:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.275 04:10:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:12:07.275 [2024-11-21 04:10:07.210276] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:08.657 04:10:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:08.657 04:10:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:08.657 04:10:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:08.657 04:10:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:08.657 04:10:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:08.657 04:10:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.657 04:10:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.657 04:10:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.657 04:10:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:08.657 04:10:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.657 04:10:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:08.657 "name": "raid_bdev1", 00:12:08.657 "uuid": "575ba3eb-2805-43e4-bd38-15923f46632d", 00:12:08.657 "strip_size_kb": 0, 00:12:08.657 "state": "online", 00:12:08.657 
"raid_level": "raid1", 00:12:08.657 "superblock": true, 00:12:08.657 "num_base_bdevs": 2, 00:12:08.657 "num_base_bdevs_discovered": 2, 00:12:08.657 "num_base_bdevs_operational": 2, 00:12:08.657 "process": { 00:12:08.657 "type": "rebuild", 00:12:08.657 "target": "spare", 00:12:08.657 "progress": { 00:12:08.657 "blocks": 20480, 00:12:08.657 "percent": 32 00:12:08.657 } 00:12:08.657 }, 00:12:08.657 "base_bdevs_list": [ 00:12:08.657 { 00:12:08.657 "name": "spare", 00:12:08.657 "uuid": "d8b673a5-b755-53b0-b268-a013a999b358", 00:12:08.657 "is_configured": true, 00:12:08.657 "data_offset": 2048, 00:12:08.657 "data_size": 63488 00:12:08.657 }, 00:12:08.657 { 00:12:08.657 "name": "BaseBdev2", 00:12:08.657 "uuid": "3d18b69f-20de-5d07-9fb2-5d05f1377d0e", 00:12:08.657 "is_configured": true, 00:12:08.657 "data_offset": 2048, 00:12:08.657 "data_size": 63488 00:12:08.657 } 00:12:08.657 ] 00:12:08.657 }' 00:12:08.657 04:10:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:08.657 04:10:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:08.657 04:10:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:08.657 04:10:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:08.657 04:10:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:12:08.657 04:10:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.657 04:10:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.657 [2024-11-21 04:10:08.354464] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:08.657 [2024-11-21 04:10:08.418279] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:08.657 [2024-11-21 04:10:08.418343] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:08.657 [2024-11-21 04:10:08.418358] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:08.657 [2024-11-21 04:10:08.418368] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:08.657 04:10:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.657 04:10:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:08.657 04:10:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:08.657 04:10:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:08.657 04:10:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:08.657 04:10:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:08.657 04:10:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:08.657 04:10:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.657 04:10:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.657 04:10:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.657 04:10:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.657 04:10:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.657 04:10:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.657 04:10:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.657 04:10:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:08.657 04:10:08 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.657 04:10:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.657 "name": "raid_bdev1", 00:12:08.657 "uuid": "575ba3eb-2805-43e4-bd38-15923f46632d", 00:12:08.657 "strip_size_kb": 0, 00:12:08.657 "state": "online", 00:12:08.657 "raid_level": "raid1", 00:12:08.657 "superblock": true, 00:12:08.657 "num_base_bdevs": 2, 00:12:08.657 "num_base_bdevs_discovered": 1, 00:12:08.657 "num_base_bdevs_operational": 1, 00:12:08.657 "base_bdevs_list": [ 00:12:08.657 { 00:12:08.657 "name": null, 00:12:08.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.657 "is_configured": false, 00:12:08.657 "data_offset": 0, 00:12:08.657 "data_size": 63488 00:12:08.657 }, 00:12:08.657 { 00:12:08.657 "name": "BaseBdev2", 00:12:08.657 "uuid": "3d18b69f-20de-5d07-9fb2-5d05f1377d0e", 00:12:08.657 "is_configured": true, 00:12:08.657 "data_offset": 2048, 00:12:08.657 "data_size": 63488 00:12:08.657 } 00:12:08.657 ] 00:12:08.657 }' 00:12:08.657 04:10:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.657 04:10:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.917 04:10:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:08.917 04:10:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:08.917 04:10:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:08.917 04:10:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:08.917 04:10:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:08.917 04:10:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.917 04:10:08 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.917 04:10:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.917 04:10:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:08.917 04:10:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.178 04:10:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:09.178 "name": "raid_bdev1", 00:12:09.178 "uuid": "575ba3eb-2805-43e4-bd38-15923f46632d", 00:12:09.178 "strip_size_kb": 0, 00:12:09.178 "state": "online", 00:12:09.178 "raid_level": "raid1", 00:12:09.178 "superblock": true, 00:12:09.178 "num_base_bdevs": 2, 00:12:09.178 "num_base_bdevs_discovered": 1, 00:12:09.178 "num_base_bdevs_operational": 1, 00:12:09.178 "base_bdevs_list": [ 00:12:09.178 { 00:12:09.178 "name": null, 00:12:09.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.178 "is_configured": false, 00:12:09.178 "data_offset": 0, 00:12:09.178 "data_size": 63488 00:12:09.178 }, 00:12:09.178 { 00:12:09.178 "name": "BaseBdev2", 00:12:09.178 "uuid": "3d18b69f-20de-5d07-9fb2-5d05f1377d0e", 00:12:09.178 "is_configured": true, 00:12:09.178 "data_offset": 2048, 00:12:09.178 "data_size": 63488 00:12:09.178 } 00:12:09.178 ] 00:12:09.178 }' 00:12:09.178 04:10:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:09.178 04:10:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:09.178 04:10:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:09.178 04:10:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:09.178 04:10:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:12:09.178 04:10:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:12:09.178 04:10:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.178 04:10:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.178 04:10:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:09.178 04:10:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.178 04:10:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.178 [2024-11-21 04:10:09.009322] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:09.178 [2024-11-21 04:10:09.009383] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:09.178 [2024-11-21 04:10:09.009408] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:12:09.178 [2024-11-21 04:10:09.009420] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:09.178 [2024-11-21 04:10:09.009869] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:09.178 [2024-11-21 04:10:09.009902] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:09.178 [2024-11-21 04:10:09.009996] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:12:09.178 [2024-11-21 04:10:09.010020] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:09.178 [2024-11-21 04:10:09.010050] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:09.178 [2024-11-21 04:10:09.010074] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:12:09.178 BaseBdev1 00:12:09.178 04:10:09 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.178 04:10:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:12:10.171 04:10:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:10.171 04:10:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:10.171 04:10:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:10.171 04:10:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:10.171 04:10:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:10.171 04:10:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:10.171 04:10:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:10.171 04:10:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:10.171 04:10:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:10.171 04:10:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:10.171 04:10:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.171 04:10:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.171 04:10:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.171 04:10:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:10.171 04:10:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.171 04:10:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:10.171 "name": "raid_bdev1", 00:12:10.171 "uuid": "575ba3eb-2805-43e4-bd38-15923f46632d", 00:12:10.171 
"strip_size_kb": 0, 00:12:10.171 "state": "online", 00:12:10.171 "raid_level": "raid1", 00:12:10.171 "superblock": true, 00:12:10.171 "num_base_bdevs": 2, 00:12:10.171 "num_base_bdevs_discovered": 1, 00:12:10.171 "num_base_bdevs_operational": 1, 00:12:10.171 "base_bdevs_list": [ 00:12:10.171 { 00:12:10.171 "name": null, 00:12:10.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.171 "is_configured": false, 00:12:10.171 "data_offset": 0, 00:12:10.171 "data_size": 63488 00:12:10.171 }, 00:12:10.171 { 00:12:10.171 "name": "BaseBdev2", 00:12:10.171 "uuid": "3d18b69f-20de-5d07-9fb2-5d05f1377d0e", 00:12:10.171 "is_configured": true, 00:12:10.171 "data_offset": 2048, 00:12:10.171 "data_size": 63488 00:12:10.171 } 00:12:10.171 ] 00:12:10.171 }' 00:12:10.171 04:10:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:10.171 04:10:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.742 04:10:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:10.742 04:10:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:10.742 04:10:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:10.742 04:10:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:10.742 04:10:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:10.742 04:10:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.742 04:10:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.742 04:10:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.742 04:10:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:10.742 04:10:10 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.742 04:10:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:10.742 "name": "raid_bdev1", 00:12:10.742 "uuid": "575ba3eb-2805-43e4-bd38-15923f46632d", 00:12:10.742 "strip_size_kb": 0, 00:12:10.742 "state": "online", 00:12:10.742 "raid_level": "raid1", 00:12:10.742 "superblock": true, 00:12:10.742 "num_base_bdevs": 2, 00:12:10.742 "num_base_bdevs_discovered": 1, 00:12:10.742 "num_base_bdevs_operational": 1, 00:12:10.742 "base_bdevs_list": [ 00:12:10.742 { 00:12:10.742 "name": null, 00:12:10.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.742 "is_configured": false, 00:12:10.742 "data_offset": 0, 00:12:10.742 "data_size": 63488 00:12:10.742 }, 00:12:10.742 { 00:12:10.742 "name": "BaseBdev2", 00:12:10.742 "uuid": "3d18b69f-20de-5d07-9fb2-5d05f1377d0e", 00:12:10.742 "is_configured": true, 00:12:10.742 "data_offset": 2048, 00:12:10.742 "data_size": 63488 00:12:10.742 } 00:12:10.742 ] 00:12:10.742 }' 00:12:10.742 04:10:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:10.742 04:10:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:10.742 04:10:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:10.742 04:10:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:10.742 04:10:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:10.742 04:10:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:12:10.742 04:10:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:10.743 04:10:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local 
arg=rpc_cmd 00:12:10.743 04:10:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:10.743 04:10:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:10.743 04:10:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:10.743 04:10:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:10.743 04:10:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.743 04:10:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.743 [2024-11-21 04:10:10.602613] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:10.743 [2024-11-21 04:10:10.602834] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:10.743 [2024-11-21 04:10:10.602849] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:10.743 request: 00:12:10.743 { 00:12:10.743 "base_bdev": "BaseBdev1", 00:12:10.743 "raid_bdev": "raid_bdev1", 00:12:10.743 "method": "bdev_raid_add_base_bdev", 00:12:10.743 "req_id": 1 00:12:10.743 } 00:12:10.743 Got JSON-RPC error response 00:12:10.743 response: 00:12:10.743 { 00:12:10.743 "code": -22, 00:12:10.743 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:12:10.743 } 00:12:10.743 04:10:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:10.743 04:10:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:12:10.743 04:10:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:10.743 04:10:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:10.743 04:10:10 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:10.743 04:10:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:12:11.683 04:10:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:11.683 04:10:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:11.683 04:10:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:11.683 04:10:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:11.683 04:10:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:11.683 04:10:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:11.683 04:10:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:11.683 04:10:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:11.683 04:10:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:11.683 04:10:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:11.683 04:10:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:11.683 04:10:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.683 04:10:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.683 04:10:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.683 04:10:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.943 04:10:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:11.943 "name": "raid_bdev1", 00:12:11.943 "uuid": 
"575ba3eb-2805-43e4-bd38-15923f46632d", 00:12:11.943 "strip_size_kb": 0, 00:12:11.943 "state": "online", 00:12:11.943 "raid_level": "raid1", 00:12:11.943 "superblock": true, 00:12:11.943 "num_base_bdevs": 2, 00:12:11.943 "num_base_bdevs_discovered": 1, 00:12:11.943 "num_base_bdevs_operational": 1, 00:12:11.943 "base_bdevs_list": [ 00:12:11.943 { 00:12:11.943 "name": null, 00:12:11.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:11.943 "is_configured": false, 00:12:11.943 "data_offset": 0, 00:12:11.943 "data_size": 63488 00:12:11.943 }, 00:12:11.943 { 00:12:11.944 "name": "BaseBdev2", 00:12:11.944 "uuid": "3d18b69f-20de-5d07-9fb2-5d05f1377d0e", 00:12:11.944 "is_configured": true, 00:12:11.944 "data_offset": 2048, 00:12:11.944 "data_size": 63488 00:12:11.944 } 00:12:11.944 ] 00:12:11.944 }' 00:12:11.944 04:10:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:11.944 04:10:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.204 04:10:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:12.204 04:10:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:12.204 04:10:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:12.204 04:10:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:12.204 04:10:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:12.204 04:10:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:12.204 04:10:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.204 04:10:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.204 04:10:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:12:12.204 04:10:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.204 04:10:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:12.204 "name": "raid_bdev1", 00:12:12.204 "uuid": "575ba3eb-2805-43e4-bd38-15923f46632d", 00:12:12.204 "strip_size_kb": 0, 00:12:12.204 "state": "online", 00:12:12.204 "raid_level": "raid1", 00:12:12.204 "superblock": true, 00:12:12.204 "num_base_bdevs": 2, 00:12:12.204 "num_base_bdevs_discovered": 1, 00:12:12.204 "num_base_bdevs_operational": 1, 00:12:12.204 "base_bdevs_list": [ 00:12:12.204 { 00:12:12.204 "name": null, 00:12:12.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.204 "is_configured": false, 00:12:12.204 "data_offset": 0, 00:12:12.204 "data_size": 63488 00:12:12.204 }, 00:12:12.204 { 00:12:12.204 "name": "BaseBdev2", 00:12:12.204 "uuid": "3d18b69f-20de-5d07-9fb2-5d05f1377d0e", 00:12:12.204 "is_configured": true, 00:12:12.204 "data_offset": 2048, 00:12:12.204 "data_size": 63488 00:12:12.204 } 00:12:12.205 ] 00:12:12.205 }' 00:12:12.205 04:10:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:12.205 04:10:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:12.205 04:10:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:12.205 04:10:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:12.205 04:10:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 86468 00:12:12.205 04:10:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 86468 ']' 00:12:12.205 04:10:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 86468 00:12:12.465 04:10:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:12.465 04:10:12 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:12.465 04:10:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86468 00:12:12.465 04:10:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:12.465 04:10:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:12.465 04:10:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86468' 00:12:12.465 killing process with pid 86468 00:12:12.465 Received shutdown signal, test time was about 60.000000 seconds 00:12:12.465 00:12:12.465 Latency(us) 00:12:12.465 [2024-11-21T04:10:12.438Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:12.465 [2024-11-21T04:10:12.438Z] =================================================================================================================== 00:12:12.465 [2024-11-21T04:10:12.438Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:12.465 04:10:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 86468 00:12:12.465 [2024-11-21 04:10:12.214541] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:12.465 04:10:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 86468 00:12:12.465 [2024-11-21 04:10:12.214719] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:12.465 [2024-11-21 04:10:12.214786] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:12.465 [2024-11-21 04:10:12.214796] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:12:12.465 [2024-11-21 04:10:12.271876] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:12.726 04:10:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 
00:12:12.726 ************************************ 00:12:12.726 00:12:12.726 real 0m22.103s 00:12:12.726 user 0m27.095s 00:12:12.726 sys 0m3.847s 00:12:12.726 04:10:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:12.726 04:10:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.726 END TEST raid_rebuild_test_sb 00:12:12.726 ************************************ 00:12:12.726 04:10:12 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:12:12.726 04:10:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:12.726 04:10:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:12.726 04:10:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:12.726 ************************************ 00:12:12.726 START TEST raid_rebuild_test_io 00:12:12.726 ************************************ 00:12:12.726 04:10:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 00:12:12.726 04:10:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:12.726 04:10:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:12.726 04:10:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:12.726 04:10:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:12.726 04:10:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:12.726 04:10:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:12.726 04:10:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:12.726 04:10:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:12.726 04:10:12 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:12.726 04:10:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:12.726 04:10:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:12.726 04:10:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:12.726 04:10:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:12.726 04:10:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:12.726 04:10:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:12.726 04:10:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:12.726 04:10:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:12.726 04:10:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:12.726 04:10:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:12.726 04:10:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:12.726 04:10:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:12.726 04:10:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:12.726 04:10:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:12:12.726 04:10:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=87184 00:12:12.726 04:10:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:12.726 04:10:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 87184 00:12:12.726 04:10:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 
87184 ']' 00:12:12.726 04:10:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:12.726 04:10:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:12.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:12.726 04:10:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:12.726 04:10:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:12.726 04:10:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:12.986 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:12.986 Zero copy mechanism will not be used. 00:12:12.986 [2024-11-21 04:10:12.763344] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:12:12.986 [2024-11-21 04:10:12.763479] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87184 ] 00:12:12.986 [2024-11-21 04:10:12.916884] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:13.246 [2024-11-21 04:10:12.959124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:13.246 [2024-11-21 04:10:13.036629] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:13.246 [2024-11-21 04:10:13.036686] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:13.817 04:10:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:13.817 04:10:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:12:13.817 04:10:13 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:13.817 04:10:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:13.817 04:10:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.817 04:10:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:13.817 BaseBdev1_malloc 00:12:13.817 04:10:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.817 04:10:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:13.817 04:10:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.817 04:10:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:13.817 [2024-11-21 04:10:13.619290] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:13.817 [2024-11-21 04:10:13.619360] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:13.817 [2024-11-21 04:10:13.619391] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:12:13.817 [2024-11-21 04:10:13.619404] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:13.817 [2024-11-21 04:10:13.621954] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:13.817 [2024-11-21 04:10:13.621990] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:13.817 BaseBdev1 00:12:13.817 04:10:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.817 04:10:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:13.817 04:10:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 
00:12:13.817 04:10:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.817 04:10:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:13.817 BaseBdev2_malloc 00:12:13.817 04:10:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.817 04:10:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:13.817 04:10:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.817 04:10:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:13.817 [2024-11-21 04:10:13.653812] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:13.817 [2024-11-21 04:10:13.653933] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:13.817 [2024-11-21 04:10:13.653960] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:13.817 [2024-11-21 04:10:13.653970] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:13.817 [2024-11-21 04:10:13.656360] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:13.817 [2024-11-21 04:10:13.656396] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:13.817 BaseBdev2 00:12:13.817 04:10:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.817 04:10:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:13.817 04:10:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.817 04:10:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:13.817 spare_malloc 00:12:13.817 04:10:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:12:13.817 04:10:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:13.817 04:10:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.817 04:10:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:13.817 spare_delay 00:12:13.817 04:10:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.817 04:10:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:13.817 04:10:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.817 04:10:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:13.817 [2024-11-21 04:10:13.700168] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:13.817 [2024-11-21 04:10:13.700233] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:13.817 [2024-11-21 04:10:13.700257] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:13.817 [2024-11-21 04:10:13.700266] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:13.817 [2024-11-21 04:10:13.702619] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:13.817 [2024-11-21 04:10:13.702652] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:13.817 spare 00:12:13.817 04:10:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.817 04:10:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:13.817 04:10:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.817 
04:10:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:13.817 [2024-11-21 04:10:13.712208] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:13.817 [2024-11-21 04:10:13.714317] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:13.817 [2024-11-21 04:10:13.714407] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:12:13.817 [2024-11-21 04:10:13.714417] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:13.817 [2024-11-21 04:10:13.714695] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:12:13.817 [2024-11-21 04:10:13.714833] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:12:13.817 [2024-11-21 04:10:13.714855] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:12:13.817 [2024-11-21 04:10:13.714979] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:13.817 04:10:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.817 04:10:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:13.817 04:10:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:13.817 04:10:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:13.817 04:10:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:13.817 04:10:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:13.817 04:10:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:13.817 04:10:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:12:13.817 04:10:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:13.817 04:10:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:13.817 04:10:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:13.817 04:10:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.817 04:10:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:13.817 04:10:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.817 04:10:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:13.817 04:10:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.817 04:10:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:13.817 "name": "raid_bdev1", 00:12:13.817 "uuid": "57e66a71-0dc2-45c3-9b71-155e23a15240", 00:12:13.817 "strip_size_kb": 0, 00:12:13.818 "state": "online", 00:12:13.818 "raid_level": "raid1", 00:12:13.818 "superblock": false, 00:12:13.818 "num_base_bdevs": 2, 00:12:13.818 "num_base_bdevs_discovered": 2, 00:12:13.818 "num_base_bdevs_operational": 2, 00:12:13.818 "base_bdevs_list": [ 00:12:13.818 { 00:12:13.818 "name": "BaseBdev1", 00:12:13.818 "uuid": "293bc9ed-f52a-500b-b39d-bbc79f260a00", 00:12:13.818 "is_configured": true, 00:12:13.818 "data_offset": 0, 00:12:13.818 "data_size": 65536 00:12:13.818 }, 00:12:13.818 { 00:12:13.818 "name": "BaseBdev2", 00:12:13.818 "uuid": "7f476b9d-89f7-5414-94ca-42a6b963be8f", 00:12:13.818 "is_configured": true, 00:12:13.818 "data_offset": 0, 00:12:13.818 "data_size": 65536 00:12:13.818 } 00:12:13.818 ] 00:12:13.818 }' 00:12:13.818 04:10:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:13.818 04:10:13 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@10 -- # set +x 00:12:14.204 04:10:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:14.204 04:10:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:14.204 04:10:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.204 04:10:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:14.204 [2024-11-21 04:10:14.144008] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:14.204 04:10:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.465 04:10:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:14.465 04:10:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.465 04:10:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:14.465 04:10:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.465 04:10:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:14.465 04:10:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.465 04:10:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:14.465 04:10:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:12:14.465 04:10:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:14.465 04:10:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:14.465 04:10:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.465 04:10:14 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@10 -- # set +x 00:12:14.465 [2024-11-21 04:10:14.235588] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:14.465 04:10:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.465 04:10:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:14.465 04:10:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:14.465 04:10:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:14.465 04:10:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:14.465 04:10:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:14.465 04:10:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:14.465 04:10:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:14.465 04:10:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:14.465 04:10:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:14.465 04:10:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:14.466 04:10:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.466 04:10:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.466 04:10:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:14.466 04:10:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:14.466 04:10:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.466 04:10:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:12:14.466 "name": "raid_bdev1", 00:12:14.466 "uuid": "57e66a71-0dc2-45c3-9b71-155e23a15240", 00:12:14.466 "strip_size_kb": 0, 00:12:14.466 "state": "online", 00:12:14.466 "raid_level": "raid1", 00:12:14.466 "superblock": false, 00:12:14.466 "num_base_bdevs": 2, 00:12:14.466 "num_base_bdevs_discovered": 1, 00:12:14.466 "num_base_bdevs_operational": 1, 00:12:14.466 "base_bdevs_list": [ 00:12:14.466 { 00:12:14.466 "name": null, 00:12:14.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.466 "is_configured": false, 00:12:14.466 "data_offset": 0, 00:12:14.466 "data_size": 65536 00:12:14.466 }, 00:12:14.466 { 00:12:14.466 "name": "BaseBdev2", 00:12:14.466 "uuid": "7f476b9d-89f7-5414-94ca-42a6b963be8f", 00:12:14.466 "is_configured": true, 00:12:14.466 "data_offset": 0, 00:12:14.466 "data_size": 65536 00:12:14.466 } 00:12:14.466 ] 00:12:14.466 }' 00:12:14.466 04:10:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:14.466 04:10:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:14.466 [2024-11-21 04:10:14.310967] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:12:14.466 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:14.466 Zero copy mechanism will not be used. 00:12:14.466 Running I/O for 60 seconds... 
00:12:14.727 04:10:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:14.727 04:10:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.727 04:10:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:14.727 [2024-11-21 04:10:14.650200] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:14.727 04:10:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.727 04:10:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:14.727 [2024-11-21 04:10:14.693889] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:12:14.727 [2024-11-21 04:10:14.696366] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:14.989 [2024-11-21 04:10:14.803785] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:14.989 [2024-11-21 04:10:14.804578] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:15.249 [2024-11-21 04:10:15.024862] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:15.249 [2024-11-21 04:10:15.025560] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:15.507 195.00 IOPS, 585.00 MiB/s [2024-11-21T04:10:15.480Z] [2024-11-21 04:10:15.370388] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:15.507 [2024-11-21 04:10:15.371163] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:15.766 [2024-11-21 04:10:15.592819] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:15.766 04:10:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:15.766 04:10:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:15.766 04:10:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:15.766 04:10:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:15.766 04:10:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:15.766 04:10:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.766 04:10:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:15.766 04:10:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.766 04:10:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:15.766 04:10:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.027 04:10:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:16.027 "name": "raid_bdev1", 00:12:16.027 "uuid": "57e66a71-0dc2-45c3-9b71-155e23a15240", 00:12:16.027 "strip_size_kb": 0, 00:12:16.027 "state": "online", 00:12:16.027 "raid_level": "raid1", 00:12:16.027 "superblock": false, 00:12:16.027 "num_base_bdevs": 2, 00:12:16.027 "num_base_bdevs_discovered": 2, 00:12:16.027 "num_base_bdevs_operational": 2, 00:12:16.027 "process": { 00:12:16.027 "type": "rebuild", 00:12:16.027 "target": "spare", 00:12:16.027 "progress": { 00:12:16.027 "blocks": 10240, 00:12:16.027 "percent": 15 00:12:16.027 } 00:12:16.027 }, 00:12:16.027 "base_bdevs_list": [ 00:12:16.027 { 00:12:16.027 "name": "spare", 00:12:16.027 "uuid": 
"ee58bd27-ea6f-563d-a5f5-d1338508f751", 00:12:16.027 "is_configured": true, 00:12:16.027 "data_offset": 0, 00:12:16.027 "data_size": 65536 00:12:16.027 }, 00:12:16.027 { 00:12:16.027 "name": "BaseBdev2", 00:12:16.027 "uuid": "7f476b9d-89f7-5414-94ca-42a6b963be8f", 00:12:16.027 "is_configured": true, 00:12:16.027 "data_offset": 0, 00:12:16.027 "data_size": 65536 00:12:16.027 } 00:12:16.027 ] 00:12:16.027 }' 00:12:16.027 04:10:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:16.027 04:10:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:16.027 04:10:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:16.027 [2024-11-21 04:10:15.840135] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:16.027 04:10:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:16.027 04:10:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:16.027 04:10:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.027 04:10:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:16.027 [2024-11-21 04:10:15.852182] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:16.027 [2024-11-21 04:10:15.946010] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:16.287 [2024-11-21 04:10:16.065804] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:16.287 [2024-11-21 04:10:16.074786] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:16.287 [2024-11-21 04:10:16.074837] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:12:16.287 [2024-11-21 04:10:16.074855] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:16.287 [2024-11-21 04:10:16.101533] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000026d0 00:12:16.287 04:10:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.287 04:10:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:16.287 04:10:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:16.287 04:10:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:16.287 04:10:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:16.287 04:10:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:16.287 04:10:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:16.287 04:10:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:16.287 04:10:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.287 04:10:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:16.287 04:10:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:16.287 04:10:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.287 04:10:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.287 04:10:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:16.287 04:10:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:16.287 04:10:16 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.287 04:10:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.287 "name": "raid_bdev1", 00:12:16.287 "uuid": "57e66a71-0dc2-45c3-9b71-155e23a15240", 00:12:16.287 "strip_size_kb": 0, 00:12:16.287 "state": "online", 00:12:16.287 "raid_level": "raid1", 00:12:16.287 "superblock": false, 00:12:16.287 "num_base_bdevs": 2, 00:12:16.287 "num_base_bdevs_discovered": 1, 00:12:16.287 "num_base_bdevs_operational": 1, 00:12:16.287 "base_bdevs_list": [ 00:12:16.287 { 00:12:16.287 "name": null, 00:12:16.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.287 "is_configured": false, 00:12:16.287 "data_offset": 0, 00:12:16.287 "data_size": 65536 00:12:16.288 }, 00:12:16.288 { 00:12:16.288 "name": "BaseBdev2", 00:12:16.288 "uuid": "7f476b9d-89f7-5414-94ca-42a6b963be8f", 00:12:16.288 "is_configured": true, 00:12:16.288 "data_offset": 0, 00:12:16.288 "data_size": 65536 00:12:16.288 } 00:12:16.288 ] 00:12:16.288 }' 00:12:16.288 04:10:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.288 04:10:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:16.808 153.50 IOPS, 460.50 MiB/s [2024-11-21T04:10:16.781Z] 04:10:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:16.808 04:10:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:16.809 04:10:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:16.809 04:10:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:16.809 04:10:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:16.809 04:10:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.809 04:10:16 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:16.809 04:10:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.809 04:10:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:16.809 04:10:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.809 04:10:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:16.809 "name": "raid_bdev1", 00:12:16.809 "uuid": "57e66a71-0dc2-45c3-9b71-155e23a15240", 00:12:16.809 "strip_size_kb": 0, 00:12:16.809 "state": "online", 00:12:16.809 "raid_level": "raid1", 00:12:16.809 "superblock": false, 00:12:16.809 "num_base_bdevs": 2, 00:12:16.809 "num_base_bdevs_discovered": 1, 00:12:16.809 "num_base_bdevs_operational": 1, 00:12:16.809 "base_bdevs_list": [ 00:12:16.809 { 00:12:16.809 "name": null, 00:12:16.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.809 "is_configured": false, 00:12:16.809 "data_offset": 0, 00:12:16.809 "data_size": 65536 00:12:16.809 }, 00:12:16.809 { 00:12:16.809 "name": "BaseBdev2", 00:12:16.809 "uuid": "7f476b9d-89f7-5414-94ca-42a6b963be8f", 00:12:16.809 "is_configured": true, 00:12:16.809 "data_offset": 0, 00:12:16.809 "data_size": 65536 00:12:16.809 } 00:12:16.809 ] 00:12:16.809 }' 00:12:16.809 04:10:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:16.809 04:10:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:16.809 04:10:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:16.809 04:10:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:16.809 04:10:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:16.809 04:10:16 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.809 04:10:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:16.809 [2024-11-21 04:10:16.736023] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:16.809 04:10:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.809 04:10:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:16.809 [2024-11-21 04:10:16.773309] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:12:16.809 [2024-11-21 04:10:16.775616] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:17.069 [2024-11-21 04:10:16.890477] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:17.069 [2024-11-21 04:10:16.891267] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:17.329 [2024-11-21 04:10:17.111290] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:17.329 [2024-11-21 04:10:17.111924] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:17.590 179.33 IOPS, 538.00 MiB/s [2024-11-21T04:10:17.563Z] [2024-11-21 04:10:17.534650] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:17.850 04:10:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:17.850 04:10:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:17.850 04:10:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:17.850 04:10:17 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:12:17.850 04:10:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:17.850 04:10:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.850 04:10:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.850 04:10:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:17.850 04:10:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:17.850 04:10:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.110 04:10:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:18.110 "name": "raid_bdev1", 00:12:18.110 "uuid": "57e66a71-0dc2-45c3-9b71-155e23a15240", 00:12:18.110 "strip_size_kb": 0, 00:12:18.110 "state": "online", 00:12:18.110 "raid_level": "raid1", 00:12:18.110 "superblock": false, 00:12:18.110 "num_base_bdevs": 2, 00:12:18.110 "num_base_bdevs_discovered": 2, 00:12:18.110 "num_base_bdevs_operational": 2, 00:12:18.110 "process": { 00:12:18.110 "type": "rebuild", 00:12:18.110 "target": "spare", 00:12:18.110 "progress": { 00:12:18.110 "blocks": 14336, 00:12:18.110 "percent": 21 00:12:18.110 } 00:12:18.110 }, 00:12:18.110 "base_bdevs_list": [ 00:12:18.110 { 00:12:18.110 "name": "spare", 00:12:18.110 "uuid": "ee58bd27-ea6f-563d-a5f5-d1338508f751", 00:12:18.110 "is_configured": true, 00:12:18.110 "data_offset": 0, 00:12:18.110 "data_size": 65536 00:12:18.110 }, 00:12:18.110 { 00:12:18.110 "name": "BaseBdev2", 00:12:18.110 "uuid": "7f476b9d-89f7-5414-94ca-42a6b963be8f", 00:12:18.110 "is_configured": true, 00:12:18.110 "data_offset": 0, 00:12:18.110 "data_size": 65536 00:12:18.110 } 00:12:18.110 ] 00:12:18.110 }' 00:12:18.110 04:10:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:18.110 
04:10:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:18.110 04:10:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:18.110 04:10:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:18.111 04:10:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:18.111 04:10:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:18.111 04:10:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:18.111 04:10:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:18.111 04:10:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=333 00:12:18.111 04:10:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:18.111 04:10:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:18.111 04:10:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:18.111 04:10:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:18.111 04:10:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:18.111 04:10:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:18.111 04:10:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.111 04:10:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.111 04:10:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:18.111 04:10:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:18.111 04:10:17 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.111 04:10:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:18.111 "name": "raid_bdev1", 00:12:18.111 "uuid": "57e66a71-0dc2-45c3-9b71-155e23a15240", 00:12:18.111 "strip_size_kb": 0, 00:12:18.111 "state": "online", 00:12:18.111 "raid_level": "raid1", 00:12:18.111 "superblock": false, 00:12:18.111 "num_base_bdevs": 2, 00:12:18.111 "num_base_bdevs_discovered": 2, 00:12:18.111 "num_base_bdevs_operational": 2, 00:12:18.111 "process": { 00:12:18.111 "type": "rebuild", 00:12:18.111 "target": "spare", 00:12:18.111 "progress": { 00:12:18.111 "blocks": 16384, 00:12:18.111 "percent": 25 00:12:18.111 } 00:12:18.111 }, 00:12:18.111 "base_bdevs_list": [ 00:12:18.111 { 00:12:18.111 "name": "spare", 00:12:18.111 "uuid": "ee58bd27-ea6f-563d-a5f5-d1338508f751", 00:12:18.111 "is_configured": true, 00:12:18.111 "data_offset": 0, 00:12:18.111 "data_size": 65536 00:12:18.111 }, 00:12:18.111 { 00:12:18.111 "name": "BaseBdev2", 00:12:18.111 "uuid": "7f476b9d-89f7-5414-94ca-42a6b963be8f", 00:12:18.111 "is_configured": true, 00:12:18.111 "data_offset": 0, 00:12:18.111 "data_size": 65536 00:12:18.111 } 00:12:18.111 ] 00:12:18.111 }' 00:12:18.111 04:10:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:18.111 04:10:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:18.111 04:10:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:18.111 04:10:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:18.111 04:10:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:18.371 [2024-11-21 04:10:18.109645] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:12:18.631 160.00 IOPS, 480.00 
MiB/s [2024-11-21T04:10:18.604Z] [2024-11-21 04:10:18.550196] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:12:18.892 [2024-11-21 04:10:18.659205] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:12:18.892 [2024-11-21 04:10:18.659654] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:12:19.152 [2024-11-21 04:10:18.901095] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:12:19.152 [2024-11-21 04:10:18.901566] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:12:19.152 04:10:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:19.152 04:10:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:19.152 04:10:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:19.152 04:10:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:19.152 04:10:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:19.152 04:10:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:19.152 04:10:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.152 04:10:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:19.152 04:10:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.152 04:10:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:19.152 04:10:19 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.152 04:10:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:19.152 "name": "raid_bdev1", 00:12:19.152 "uuid": "57e66a71-0dc2-45c3-9b71-155e23a15240", 00:12:19.152 "strip_size_kb": 0, 00:12:19.152 "state": "online", 00:12:19.152 "raid_level": "raid1", 00:12:19.152 "superblock": false, 00:12:19.152 "num_base_bdevs": 2, 00:12:19.153 "num_base_bdevs_discovered": 2, 00:12:19.153 "num_base_bdevs_operational": 2, 00:12:19.153 "process": { 00:12:19.153 "type": "rebuild", 00:12:19.153 "target": "spare", 00:12:19.153 "progress": { 00:12:19.153 "blocks": 34816, 00:12:19.153 "percent": 53 00:12:19.153 } 00:12:19.153 }, 00:12:19.153 "base_bdevs_list": [ 00:12:19.153 { 00:12:19.153 "name": "spare", 00:12:19.153 "uuid": "ee58bd27-ea6f-563d-a5f5-d1338508f751", 00:12:19.153 "is_configured": true, 00:12:19.153 "data_offset": 0, 00:12:19.153 "data_size": 65536 00:12:19.153 }, 00:12:19.153 { 00:12:19.153 "name": "BaseBdev2", 00:12:19.153 "uuid": "7f476b9d-89f7-5414-94ca-42a6b963be8f", 00:12:19.153 "is_configured": true, 00:12:19.153 "data_offset": 0, 00:12:19.153 "data_size": 65536 00:12:19.153 } 00:12:19.153 ] 00:12:19.153 }' 00:12:19.153 04:10:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:19.411 04:10:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:19.411 04:10:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:19.411 04:10:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:19.411 04:10:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:19.980 136.80 IOPS, 410.40 MiB/s [2024-11-21T04:10:19.953Z] [2024-11-21 04:10:19.913182] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 
offset_end: 55296 00:12:20.243 [2024-11-21 04:10:20.133301] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:12:20.243 04:10:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:20.243 04:10:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:20.243 04:10:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:20.243 04:10:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:20.243 04:10:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:20.243 04:10:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:20.243 04:10:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.243 04:10:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:20.243 04:10:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.243 04:10:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:20.503 04:10:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.503 04:10:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:20.503 "name": "raid_bdev1", 00:12:20.503 "uuid": "57e66a71-0dc2-45c3-9b71-155e23a15240", 00:12:20.503 "strip_size_kb": 0, 00:12:20.503 "state": "online", 00:12:20.503 "raid_level": "raid1", 00:12:20.503 "superblock": false, 00:12:20.503 "num_base_bdevs": 2, 00:12:20.503 "num_base_bdevs_discovered": 2, 00:12:20.503 "num_base_bdevs_operational": 2, 00:12:20.503 "process": { 00:12:20.503 "type": "rebuild", 00:12:20.503 "target": "spare", 00:12:20.503 "progress": { 00:12:20.503 "blocks": 
53248, 00:12:20.503 "percent": 81 00:12:20.503 } 00:12:20.503 }, 00:12:20.503 "base_bdevs_list": [ 00:12:20.503 { 00:12:20.503 "name": "spare", 00:12:20.503 "uuid": "ee58bd27-ea6f-563d-a5f5-d1338508f751", 00:12:20.503 "is_configured": true, 00:12:20.503 "data_offset": 0, 00:12:20.503 "data_size": 65536 00:12:20.503 }, 00:12:20.503 { 00:12:20.503 "name": "BaseBdev2", 00:12:20.503 "uuid": "7f476b9d-89f7-5414-94ca-42a6b963be8f", 00:12:20.503 "is_configured": true, 00:12:20.503 "data_offset": 0, 00:12:20.503 "data_size": 65536 00:12:20.503 } 00:12:20.503 ] 00:12:20.503 }' 00:12:20.503 04:10:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:20.503 04:10:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:20.503 04:10:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:20.503 118.83 IOPS, 356.50 MiB/s [2024-11-21T04:10:20.476Z] 04:10:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:20.503 04:10:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:21.073 [2024-11-21 04:10:20.895155] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:21.073 [2024-11-21 04:10:20.994920] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:21.073 [2024-11-21 04:10:20.997695] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:21.644 108.57 IOPS, 325.71 MiB/s [2024-11-21T04:10:21.617Z] 04:10:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:21.644 04:10:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:21.644 04:10:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:21.644 04:10:21 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:21.644 04:10:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:21.644 04:10:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:21.644 04:10:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.644 04:10:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.644 04:10:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:21.644 04:10:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:21.644 04:10:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.644 04:10:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:21.644 "name": "raid_bdev1", 00:12:21.644 "uuid": "57e66a71-0dc2-45c3-9b71-155e23a15240", 00:12:21.644 "strip_size_kb": 0, 00:12:21.644 "state": "online", 00:12:21.644 "raid_level": "raid1", 00:12:21.644 "superblock": false, 00:12:21.644 "num_base_bdevs": 2, 00:12:21.644 "num_base_bdevs_discovered": 2, 00:12:21.644 "num_base_bdevs_operational": 2, 00:12:21.644 "base_bdevs_list": [ 00:12:21.644 { 00:12:21.644 "name": "spare", 00:12:21.644 "uuid": "ee58bd27-ea6f-563d-a5f5-d1338508f751", 00:12:21.644 "is_configured": true, 00:12:21.644 "data_offset": 0, 00:12:21.644 "data_size": 65536 00:12:21.644 }, 00:12:21.644 { 00:12:21.644 "name": "BaseBdev2", 00:12:21.644 "uuid": "7f476b9d-89f7-5414-94ca-42a6b963be8f", 00:12:21.644 "is_configured": true, 00:12:21.644 "data_offset": 0, 00:12:21.644 "data_size": 65536 00:12:21.644 } 00:12:21.644 ] 00:12:21.644 }' 00:12:21.644 04:10:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:21.644 04:10:21 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:21.644 04:10:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:21.644 04:10:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:21.644 04:10:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:12:21.644 04:10:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:21.644 04:10:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:21.644 04:10:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:21.644 04:10:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:21.644 04:10:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:21.644 04:10:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.644 04:10:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.644 04:10:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:21.644 04:10:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:21.644 04:10:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.644 04:10:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:21.644 "name": "raid_bdev1", 00:12:21.644 "uuid": "57e66a71-0dc2-45c3-9b71-155e23a15240", 00:12:21.644 "strip_size_kb": 0, 00:12:21.644 "state": "online", 00:12:21.644 "raid_level": "raid1", 00:12:21.644 "superblock": false, 00:12:21.644 "num_base_bdevs": 2, 00:12:21.644 "num_base_bdevs_discovered": 2, 00:12:21.644 "num_base_bdevs_operational": 2, 00:12:21.644 "base_bdevs_list": [ 00:12:21.644 { 00:12:21.644 
"name": "spare", 00:12:21.644 "uuid": "ee58bd27-ea6f-563d-a5f5-d1338508f751", 00:12:21.644 "is_configured": true, 00:12:21.644 "data_offset": 0, 00:12:21.644 "data_size": 65536 00:12:21.644 }, 00:12:21.644 { 00:12:21.644 "name": "BaseBdev2", 00:12:21.644 "uuid": "7f476b9d-89f7-5414-94ca-42a6b963be8f", 00:12:21.644 "is_configured": true, 00:12:21.644 "data_offset": 0, 00:12:21.644 "data_size": 65536 00:12:21.644 } 00:12:21.644 ] 00:12:21.644 }' 00:12:21.644 04:10:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:21.644 04:10:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:21.645 04:10:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:21.905 04:10:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:21.905 04:10:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:21.905 04:10:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:21.905 04:10:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:21.905 04:10:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:21.905 04:10:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:21.905 04:10:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:21.905 04:10:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:21.905 04:10:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:21.905 04:10:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:21.905 04:10:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 
00:12:21.905 04:10:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.905 04:10:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:21.905 04:10:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.905 04:10:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:21.905 04:10:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.905 04:10:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:21.905 "name": "raid_bdev1", 00:12:21.905 "uuid": "57e66a71-0dc2-45c3-9b71-155e23a15240", 00:12:21.905 "strip_size_kb": 0, 00:12:21.905 "state": "online", 00:12:21.905 "raid_level": "raid1", 00:12:21.905 "superblock": false, 00:12:21.905 "num_base_bdevs": 2, 00:12:21.905 "num_base_bdevs_discovered": 2, 00:12:21.905 "num_base_bdevs_operational": 2, 00:12:21.905 "base_bdevs_list": [ 00:12:21.905 { 00:12:21.905 "name": "spare", 00:12:21.905 "uuid": "ee58bd27-ea6f-563d-a5f5-d1338508f751", 00:12:21.905 "is_configured": true, 00:12:21.905 "data_offset": 0, 00:12:21.905 "data_size": 65536 00:12:21.905 }, 00:12:21.905 { 00:12:21.905 "name": "BaseBdev2", 00:12:21.905 "uuid": "7f476b9d-89f7-5414-94ca-42a6b963be8f", 00:12:21.905 "is_configured": true, 00:12:21.905 "data_offset": 0, 00:12:21.905 "data_size": 65536 00:12:21.905 } 00:12:21.905 ] 00:12:21.905 }' 00:12:21.905 04:10:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:21.905 04:10:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:22.165 04:10:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:22.165 04:10:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.165 04:10:22 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:12:22.165 [2024-11-21 04:10:22.039417] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:22.165 [2024-11-21 04:10:22.039467] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:22.165 00:12:22.165 Latency(us) 00:12:22.165 [2024-11-21T04:10:22.138Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:22.165 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:12:22.165 raid_bdev1 : 7.80 101.58 304.74 0.00 0.00 13843.64 277.24 114931.26 00:12:22.165 [2024-11-21T04:10:22.138Z] =================================================================================================================== 00:12:22.165 [2024-11-21T04:10:22.138Z] Total : 101.58 304.74 0.00 0.00 13843.64 277.24 114931.26 00:12:22.165 { 00:12:22.165 "results": [ 00:12:22.165 { 00:12:22.165 "job": "raid_bdev1", 00:12:22.165 "core_mask": "0x1", 00:12:22.165 "workload": "randrw", 00:12:22.165 "percentage": 50, 00:12:22.165 "status": "finished", 00:12:22.165 "queue_depth": 2, 00:12:22.165 "io_size": 3145728, 00:12:22.165 "runtime": 7.796755, 00:12:22.165 "iops": 101.58072172333233, 00:12:22.165 "mibps": 304.742165169997, 00:12:22.165 "io_failed": 0, 00:12:22.165 "io_timeout": 0, 00:12:22.165 "avg_latency_us": 13843.63671651008, 00:12:22.165 "min_latency_us": 277.2401746724891, 00:12:22.165 "max_latency_us": 114931.2558951965 00:12:22.165 } 00:12:22.165 ], 00:12:22.165 "core_count": 1 00:12:22.165 } 00:12:22.165 [2024-11-21 04:10:22.099306] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:22.165 [2024-11-21 04:10:22.099363] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:22.165 [2024-11-21 04:10:22.099458] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:22.165 [2024-11-21 04:10:22.099473] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:12:22.165 04:10:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.165 04:10:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.165 04:10:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:12:22.165 04:10:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.165 04:10:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:22.165 04:10:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.426 04:10:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:22.426 04:10:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:22.426 04:10:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:12:22.426 04:10:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:12:22.426 04:10:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:22.426 04:10:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:12:22.426 04:10:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:22.426 04:10:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:22.426 04:10:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:22.426 04:10:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:22.426 04:10:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:22.426 04:10:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:22.426 04:10:22 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:12:22.426 /dev/nbd0 00:12:22.426 04:10:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:22.426 04:10:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:22.426 04:10:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:22.426 04:10:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:12:22.426 04:10:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:22.426 04:10:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:22.426 04:10:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:22.684 04:10:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:12:22.684 04:10:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:22.684 04:10:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:22.684 04:10:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:22.684 1+0 records in 00:12:22.684 1+0 records out 00:12:22.684 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000503531 s, 8.1 MB/s 00:12:22.684 04:10:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:22.684 04:10:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:12:22.684 04:10:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:22.684 04:10:22 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:22.684 04:10:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:12:22.684 04:10:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:22.684 04:10:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:22.684 04:10:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:22.684 04:10:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:12:22.684 04:10:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:12:22.684 04:10:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:22.684 04:10:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:12:22.684 04:10:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:22.684 04:10:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:22.684 04:10:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:22.684 04:10:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:22.684 04:10:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:22.684 04:10:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:22.684 04:10:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:12:22.684 /dev/nbd1 00:12:22.684 04:10:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:22.684 04:10:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:22.684 04:10:22 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:22.684 04:10:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:12:22.684 04:10:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:22.684 04:10:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:22.684 04:10:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:22.684 04:10:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:12:22.684 04:10:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:22.684 04:10:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:22.684 04:10:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:22.944 1+0 records in 00:12:22.944 1+0 records out 00:12:22.944 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000517416 s, 7.9 MB/s 00:12:22.944 04:10:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:22.944 04:10:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:12:22.944 04:10:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:22.944 04:10:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:22.944 04:10:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:12:22.944 04:10:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:22.944 04:10:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:22.944 04:10:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 
/dev/nbd0 /dev/nbd1 00:12:22.944 04:10:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:22.944 04:10:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:22.944 04:10:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:22.944 04:10:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:22.944 04:10:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:22.944 04:10:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:22.944 04:10:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:23.203 04:10:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:23.203 04:10:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:23.203 04:10:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:23.203 04:10:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:23.203 04:10:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:23.203 04:10:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:23.203 04:10:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:23.203 04:10:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:23.203 04:10:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:23.203 04:10:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:23.203 04:10:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 
00:12:23.203 04:10:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:23.203 04:10:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:23.203 04:10:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:23.203 04:10:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:23.203 04:10:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:23.463 04:10:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:23.463 04:10:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:23.463 04:10:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:23.463 04:10:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:23.463 04:10:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:23.463 04:10:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:23.463 04:10:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:23.463 04:10:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:23.463 04:10:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 87184 00:12:23.463 04:10:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 87184 ']' 00:12:23.463 04:10:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 87184 00:12:23.463 04:10:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:12:23.463 04:10:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:23.463 04:10:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 87184 00:12:23.463 04:10:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:23.463 04:10:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:23.463 04:10:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87184' 00:12:23.463 killing process with pid 87184 00:12:23.463 04:10:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 87184 00:12:23.463 Received shutdown signal, test time was about 8.937338 seconds 00:12:23.463 00:12:23.463 Latency(us) 00:12:23.463 [2024-11-21T04:10:23.436Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:23.463 [2024-11-21T04:10:23.436Z] =================================================================================================================== 00:12:23.463 [2024-11-21T04:10:23.437Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:23.464 [2024-11-21 04:10:23.233632] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:23.464 04:10:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 87184 00:12:23.464 [2024-11-21 04:10:23.281116] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:23.724 04:10:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:12:23.724 00:12:23.724 real 0m10.932s 00:12:23.724 user 0m13.809s 00:12:23.724 sys 0m1.594s 00:12:23.724 ************************************ 00:12:23.724 END TEST raid_rebuild_test_io 00:12:23.724 ************************************ 00:12:23.724 04:10:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:23.724 04:10:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:23.724 04:10:23 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:12:23.724 
04:10:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:23.724 04:10:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:23.724 04:10:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:23.724 ************************************ 00:12:23.724 START TEST raid_rebuild_test_sb_io 00:12:23.724 ************************************ 00:12:23.724 04:10:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 00:12:23.724 04:10:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:23.724 04:10:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:23.724 04:10:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:23.724 04:10:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:23.724 04:10:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:23.724 04:10:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:23.724 04:10:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:23.724 04:10:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:23.724 04:10:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:23.724 04:10:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:23.724 04:10:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:23.724 04:10:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:23.724 04:10:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:23.724 04:10:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 
'BaseBdev2') 00:12:23.724 04:10:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:23.724 04:10:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:23.724 04:10:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:23.724 04:10:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:23.724 04:10:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:23.724 04:10:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:23.724 04:10:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:23.724 04:10:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:23.724 04:10:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:23.724 04:10:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:23.724 04:10:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=87549 00:12:23.724 04:10:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:23.724 04:10:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 87549 00:12:23.724 04:10:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 87549 ']' 00:12:23.724 04:10:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:23.724 04:10:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:23.724 04:10:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:12:23.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:23.724 04:10:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:23.724 04:10:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:23.984 [2024-11-21 04:10:23.780030] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:12:23.984 [2024-11-21 04:10:23.780292] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87549 ] 00:12:23.984 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:23.984 Zero copy mechanism will not be used. 00:12:23.984 [2024-11-21 04:10:23.937057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:24.244 [2024-11-21 04:10:23.976866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.244 [2024-11-21 04:10:24.054069] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:24.244 [2024-11-21 04:10:24.054189] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:24.814 04:10:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:24.814 04:10:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:12:24.814 04:10:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:24.814 04:10:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:24.814 04:10:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.814 04:10:24 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:12:24.814 BaseBdev1_malloc 00:12:24.814 04:10:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.814 04:10:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:24.814 04:10:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.814 04:10:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:24.814 [2024-11-21 04:10:24.648533] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:24.814 [2024-11-21 04:10:24.648621] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:24.814 [2024-11-21 04:10:24.648660] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:12:24.814 [2024-11-21 04:10:24.648677] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:24.814 [2024-11-21 04:10:24.651194] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:24.814 [2024-11-21 04:10:24.651241] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:24.814 BaseBdev1 00:12:24.814 04:10:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.814 04:10:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:24.814 04:10:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:24.814 04:10:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.814 04:10:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:24.814 BaseBdev2_malloc 00:12:24.814 04:10:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:24.814 04:10:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:24.814 04:10:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.814 04:10:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:24.814 [2024-11-21 04:10:24.683153] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:24.814 [2024-11-21 04:10:24.683203] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:24.814 [2024-11-21 04:10:24.683241] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:24.814 [2024-11-21 04:10:24.683251] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:24.814 [2024-11-21 04:10:24.685681] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:24.814 [2024-11-21 04:10:24.685723] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:24.814 BaseBdev2 00:12:24.814 04:10:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.814 04:10:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:24.814 04:10:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.814 04:10:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:24.814 spare_malloc 00:12:24.814 04:10:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.814 04:10:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:24.814 04:10:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:24.814 04:10:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:24.814 spare_delay 00:12:24.814 04:10:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.814 04:10:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:24.814 04:10:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.814 04:10:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:24.814 [2024-11-21 04:10:24.729777] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:24.814 [2024-11-21 04:10:24.729831] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:24.814 [2024-11-21 04:10:24.729853] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:24.814 [2024-11-21 04:10:24.729861] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:24.814 [2024-11-21 04:10:24.732319] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:24.814 [2024-11-21 04:10:24.732425] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:24.814 spare 00:12:24.814 04:10:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.814 04:10:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:24.814 04:10:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.814 04:10:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:24.814 [2024-11-21 04:10:24.741805] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:24.814 [2024-11-21 04:10:24.743951] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:24.814 [2024-11-21 04:10:24.744173] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:12:24.814 [2024-11-21 04:10:24.744190] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:24.814 [2024-11-21 04:10:24.744494] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:12:24.814 [2024-11-21 04:10:24.744652] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:12:24.814 [2024-11-21 04:10:24.744666] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:12:24.814 [2024-11-21 04:10:24.744781] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:24.814 04:10:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.814 04:10:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:24.814 04:10:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:24.815 04:10:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:24.815 04:10:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:24.815 04:10:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:24.815 04:10:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:24.815 04:10:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:24.815 04:10:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:24.815 04:10:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:24.815 04:10:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:24.815 04:10:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.815 04:10:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.815 04:10:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:24.815 04:10:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:24.815 04:10:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.074 04:10:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:25.074 "name": "raid_bdev1", 00:12:25.074 "uuid": "c0eabde8-5ce7-48dd-9e90-fb6f6599afdd", 00:12:25.074 "strip_size_kb": 0, 00:12:25.074 "state": "online", 00:12:25.074 "raid_level": "raid1", 00:12:25.074 "superblock": true, 00:12:25.074 "num_base_bdevs": 2, 00:12:25.074 "num_base_bdevs_discovered": 2, 00:12:25.074 "num_base_bdevs_operational": 2, 00:12:25.074 "base_bdevs_list": [ 00:12:25.074 { 00:12:25.074 "name": "BaseBdev1", 00:12:25.074 "uuid": "d35281dc-dd26-513f-a93d-29bb61176343", 00:12:25.074 "is_configured": true, 00:12:25.074 "data_offset": 2048, 00:12:25.074 "data_size": 63488 00:12:25.074 }, 00:12:25.074 { 00:12:25.074 "name": "BaseBdev2", 00:12:25.074 "uuid": "44a28021-0ff3-5595-9b9b-64e02320bf80", 00:12:25.074 "is_configured": true, 00:12:25.074 "data_offset": 2048, 00:12:25.074 "data_size": 63488 00:12:25.074 } 00:12:25.074 ] 00:12:25.074 }' 00:12:25.074 04:10:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:25.074 04:10:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:25.333 04:10:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 
00:12:25.333 04:10:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:25.333 04:10:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.333 04:10:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:25.333 [2024-11-21 04:10:25.193348] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:25.333 04:10:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.333 04:10:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:12:25.333 04:10:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:25.333 04:10:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.333 04:10:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.333 04:10:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:25.333 04:10:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.333 04:10:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:25.333 04:10:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:12:25.333 04:10:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:25.333 04:10:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:25.333 04:10:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.333 04:10:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:25.333 [2024-11-21 04:10:25.264913] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:25.333 04:10:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.333 04:10:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:25.334 04:10:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:25.334 04:10:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:25.334 04:10:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:25.334 04:10:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:25.334 04:10:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:25.334 04:10:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:25.334 04:10:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:25.334 04:10:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:25.334 04:10:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:25.334 04:10:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.334 04:10:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:25.334 04:10:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.334 04:10:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:25.334 04:10:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.593 04:10:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:25.593 "name": 
"raid_bdev1", 00:12:25.593 "uuid": "c0eabde8-5ce7-48dd-9e90-fb6f6599afdd", 00:12:25.593 "strip_size_kb": 0, 00:12:25.593 "state": "online", 00:12:25.593 "raid_level": "raid1", 00:12:25.593 "superblock": true, 00:12:25.593 "num_base_bdevs": 2, 00:12:25.593 "num_base_bdevs_discovered": 1, 00:12:25.593 "num_base_bdevs_operational": 1, 00:12:25.593 "base_bdevs_list": [ 00:12:25.593 { 00:12:25.593 "name": null, 00:12:25.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:25.593 "is_configured": false, 00:12:25.593 "data_offset": 0, 00:12:25.593 "data_size": 63488 00:12:25.593 }, 00:12:25.593 { 00:12:25.593 "name": "BaseBdev2", 00:12:25.593 "uuid": "44a28021-0ff3-5595-9b9b-64e02320bf80", 00:12:25.593 "is_configured": true, 00:12:25.593 "data_offset": 2048, 00:12:25.593 "data_size": 63488 00:12:25.593 } 00:12:25.593 ] 00:12:25.593 }' 00:12:25.593 04:10:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:25.593 04:10:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:25.593 [2024-11-21 04:10:25.368180] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:12:25.593 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:25.593 Zero copy mechanism will not be used. 00:12:25.593 Running I/O for 60 seconds... 
00:12:25.853 04:10:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:25.853 04:10:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.853 04:10:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:25.853 [2024-11-21 04:10:25.753995] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:25.853 04:10:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.853 04:10:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:25.853 [2024-11-21 04:10:25.821107] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:12:25.853 [2024-11-21 04:10:25.823896] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:26.113 [2024-11-21 04:10:25.949649] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:26.113 [2024-11-21 04:10:25.950665] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:26.113 [2024-11-21 04:10:26.074057] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:26.113 [2024-11-21 04:10:26.074503] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:26.373 [2024-11-21 04:10:26.303254] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:26.632 217.00 IOPS, 651.00 MiB/s [2024-11-21T04:10:26.605Z] [2024-11-21 04:10:26.425382] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:26.892 [2024-11-21 04:10:26.742832] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:26.892 04:10:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:26.892 04:10:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:26.892 04:10:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:26.892 04:10:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:26.892 04:10:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:26.892 04:10:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.892 04:10:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.892 04:10:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.892 04:10:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:26.892 04:10:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.892 04:10:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:26.892 "name": "raid_bdev1", 00:12:26.892 "uuid": "c0eabde8-5ce7-48dd-9e90-fb6f6599afdd", 00:12:26.892 "strip_size_kb": 0, 00:12:26.892 "state": "online", 00:12:26.892 "raid_level": "raid1", 00:12:26.892 "superblock": true, 00:12:26.892 "num_base_bdevs": 2, 00:12:26.892 "num_base_bdevs_discovered": 2, 00:12:26.892 "num_base_bdevs_operational": 2, 00:12:26.892 "process": { 00:12:26.892 "type": "rebuild", 00:12:26.892 "target": "spare", 00:12:26.892 "progress": { 00:12:26.892 "blocks": 14336, 00:12:26.892 "percent": 22 00:12:26.892 } 00:12:26.892 }, 00:12:26.892 "base_bdevs_list": [ 00:12:26.892 { 00:12:26.892 "name": "spare", 
00:12:26.892 "uuid": "688423b1-ccd4-501d-961b-fff45abcc314", 00:12:26.892 "is_configured": true, 00:12:26.892 "data_offset": 2048, 00:12:26.892 "data_size": 63488 00:12:26.892 }, 00:12:26.892 { 00:12:26.892 "name": "BaseBdev2", 00:12:26.892 "uuid": "44a28021-0ff3-5595-9b9b-64e02320bf80", 00:12:26.892 "is_configured": true, 00:12:26.892 "data_offset": 2048, 00:12:26.892 "data_size": 63488 00:12:26.892 } 00:12:26.892 ] 00:12:26.892 }' 00:12:26.892 04:10:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:27.152 04:10:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:27.152 04:10:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:27.152 04:10:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:27.152 04:10:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:27.152 04:10:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.152 04:10:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:27.152 [2024-11-21 04:10:26.957383] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:27.152 [2024-11-21 04:10:26.976430] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:27.152 [2024-11-21 04:10:26.984778] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:27.152 [2024-11-21 04:10:26.984815] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:27.152 [2024-11-21 04:10:26.984832] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:27.152 [2024-11-21 04:10:27.006117] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 
0x60d0000026d0 00:12:27.152 04:10:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.152 04:10:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:27.152 04:10:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:27.152 04:10:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:27.152 04:10:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:27.152 04:10:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:27.152 04:10:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:27.152 04:10:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:27.152 04:10:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:27.152 04:10:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:27.152 04:10:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:27.152 04:10:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:27.152 04:10:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.152 04:10:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.152 04:10:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:27.152 04:10:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.152 04:10:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:27.152 "name": "raid_bdev1", 00:12:27.152 "uuid": 
"c0eabde8-5ce7-48dd-9e90-fb6f6599afdd", 00:12:27.152 "strip_size_kb": 0, 00:12:27.152 "state": "online", 00:12:27.152 "raid_level": "raid1", 00:12:27.152 "superblock": true, 00:12:27.152 "num_base_bdevs": 2, 00:12:27.152 "num_base_bdevs_discovered": 1, 00:12:27.152 "num_base_bdevs_operational": 1, 00:12:27.152 "base_bdevs_list": [ 00:12:27.152 { 00:12:27.152 "name": null, 00:12:27.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:27.152 "is_configured": false, 00:12:27.152 "data_offset": 0, 00:12:27.152 "data_size": 63488 00:12:27.152 }, 00:12:27.152 { 00:12:27.152 "name": "BaseBdev2", 00:12:27.152 "uuid": "44a28021-0ff3-5595-9b9b-64e02320bf80", 00:12:27.152 "is_configured": true, 00:12:27.152 "data_offset": 2048, 00:12:27.152 "data_size": 63488 00:12:27.152 } 00:12:27.152 ] 00:12:27.152 }' 00:12:27.152 04:10:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:27.152 04:10:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:27.671 191.50 IOPS, 574.50 MiB/s [2024-11-21T04:10:27.645Z] 04:10:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:27.672 04:10:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:27.672 04:10:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:27.672 04:10:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:27.672 04:10:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:27.672 04:10:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.672 04:10:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.672 04:10:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:27.672 04:10:27 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:27.672 04:10:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.672 04:10:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:27.672 "name": "raid_bdev1", 00:12:27.672 "uuid": "c0eabde8-5ce7-48dd-9e90-fb6f6599afdd", 00:12:27.672 "strip_size_kb": 0, 00:12:27.672 "state": "online", 00:12:27.672 "raid_level": "raid1", 00:12:27.672 "superblock": true, 00:12:27.672 "num_base_bdevs": 2, 00:12:27.672 "num_base_bdevs_discovered": 1, 00:12:27.672 "num_base_bdevs_operational": 1, 00:12:27.672 "base_bdevs_list": [ 00:12:27.672 { 00:12:27.672 "name": null, 00:12:27.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:27.672 "is_configured": false, 00:12:27.672 "data_offset": 0, 00:12:27.672 "data_size": 63488 00:12:27.672 }, 00:12:27.672 { 00:12:27.672 "name": "BaseBdev2", 00:12:27.672 "uuid": "44a28021-0ff3-5595-9b9b-64e02320bf80", 00:12:27.672 "is_configured": true, 00:12:27.672 "data_offset": 2048, 00:12:27.672 "data_size": 63488 00:12:27.672 } 00:12:27.672 ] 00:12:27.672 }' 00:12:27.672 04:10:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:27.672 04:10:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:27.672 04:10:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:27.672 04:10:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:27.672 04:10:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:27.672 04:10:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.672 04:10:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:27.672 
[2024-11-21 04:10:27.564487] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:27.672 04:10:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.672 04:10:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:27.672 [2024-11-21 04:10:27.628308] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:12:27.672 [2024-11-21 04:10:27.630662] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:27.931 [2024-11-21 04:10:27.743451] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:27.932 [2024-11-21 04:10:27.743905] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:27.932 [2024-11-21 04:10:27.845825] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:27.932 [2024-11-21 04:10:27.846101] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:28.600 [2024-11-21 04:10:28.199968] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:28.600 170.67 IOPS, 512.00 MiB/s [2024-11-21T04:10:28.573Z] [2024-11-21 04:10:28.546798] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:28.870 04:10:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:28.870 04:10:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:28.870 04:10:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:28.870 04:10:28 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:28.870 04:10:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:28.870 04:10:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.870 04:10:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:28.870 04:10:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.870 04:10:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:28.870 04:10:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.870 04:10:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:28.870 "name": "raid_bdev1", 00:12:28.870 "uuid": "c0eabde8-5ce7-48dd-9e90-fb6f6599afdd", 00:12:28.870 "strip_size_kb": 0, 00:12:28.870 "state": "online", 00:12:28.870 "raid_level": "raid1", 00:12:28.870 "superblock": true, 00:12:28.870 "num_base_bdevs": 2, 00:12:28.870 "num_base_bdevs_discovered": 2, 00:12:28.870 "num_base_bdevs_operational": 2, 00:12:28.870 "process": { 00:12:28.870 "type": "rebuild", 00:12:28.870 "target": "spare", 00:12:28.870 "progress": { 00:12:28.870 "blocks": 16384, 00:12:28.870 "percent": 25 00:12:28.870 } 00:12:28.870 }, 00:12:28.870 "base_bdevs_list": [ 00:12:28.870 { 00:12:28.870 "name": "spare", 00:12:28.870 "uuid": "688423b1-ccd4-501d-961b-fff45abcc314", 00:12:28.870 "is_configured": true, 00:12:28.870 "data_offset": 2048, 00:12:28.870 "data_size": 63488 00:12:28.870 }, 00:12:28.870 { 00:12:28.870 "name": "BaseBdev2", 00:12:28.870 "uuid": "44a28021-0ff3-5595-9b9b-64e02320bf80", 00:12:28.870 "is_configured": true, 00:12:28.870 "data_offset": 2048, 00:12:28.870 "data_size": 63488 00:12:28.870 } 00:12:28.870 ] 00:12:28.870 }' 00:12:28.870 04:10:28 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:28.870 04:10:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:28.870 04:10:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:28.870 04:10:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:28.870 04:10:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:12:28.870 04:10:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:12:28.870 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:12:28.870 04:10:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:28.870 04:10:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:28.870 04:10:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:28.870 04:10:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=344 00:12:28.870 04:10:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:28.870 04:10:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:28.870 04:10:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:28.870 04:10:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:28.870 04:10:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:28.870 04:10:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:28.870 04:10:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.870 04:10:28 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.870 04:10:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:28.870 04:10:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:28.870 04:10:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.870 04:10:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:28.870 "name": "raid_bdev1", 00:12:28.870 "uuid": "c0eabde8-5ce7-48dd-9e90-fb6f6599afdd", 00:12:28.870 "strip_size_kb": 0, 00:12:28.870 "state": "online", 00:12:28.870 "raid_level": "raid1", 00:12:28.870 "superblock": true, 00:12:28.870 "num_base_bdevs": 2, 00:12:28.870 "num_base_bdevs_discovered": 2, 00:12:28.870 "num_base_bdevs_operational": 2, 00:12:28.870 "process": { 00:12:28.870 "type": "rebuild", 00:12:28.870 "target": "spare", 00:12:28.870 "progress": { 00:12:28.870 "blocks": 16384, 00:12:28.870 "percent": 25 00:12:28.870 } 00:12:28.870 }, 00:12:28.870 "base_bdevs_list": [ 00:12:28.870 { 00:12:28.870 "name": "spare", 00:12:28.870 "uuid": "688423b1-ccd4-501d-961b-fff45abcc314", 00:12:28.870 "is_configured": true, 00:12:28.870 "data_offset": 2048, 00:12:28.870 "data_size": 63488 00:12:28.870 }, 00:12:28.870 { 00:12:28.870 "name": "BaseBdev2", 00:12:28.870 "uuid": "44a28021-0ff3-5595-9b9b-64e02320bf80", 00:12:28.871 "is_configured": true, 00:12:28.871 "data_offset": 2048, 00:12:28.871 "data_size": 63488 00:12:28.871 } 00:12:28.871 ] 00:12:28.871 }' 00:12:28.871 04:10:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:29.130 04:10:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:29.131 04:10:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:29.131 [2024-11-21 
04:10:28.896799] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:12:29.131 04:10:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:29.131 04:10:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:29.131 [2024-11-21 04:10:29.012715] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:12:29.131 [2024-11-21 04:10:29.012968] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:12:29.388 [2024-11-21 04:10:29.323101] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:12:29.647 144.75 IOPS, 434.25 MiB/s [2024-11-21T04:10:29.620Z] [2024-11-21 04:10:29.540382] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:12:30.216 [2024-11-21 04:10:29.887587] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:12:30.216 04:10:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:30.216 04:10:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:30.216 04:10:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:30.216 04:10:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:30.216 04:10:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:30.216 04:10:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:30.216 04:10:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:12:30.216 04:10:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.216 04:10:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.216 04:10:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:30.216 04:10:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.216 04:10:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:30.216 "name": "raid_bdev1", 00:12:30.216 "uuid": "c0eabde8-5ce7-48dd-9e90-fb6f6599afdd", 00:12:30.216 "strip_size_kb": 0, 00:12:30.216 "state": "online", 00:12:30.216 "raid_level": "raid1", 00:12:30.216 "superblock": true, 00:12:30.216 "num_base_bdevs": 2, 00:12:30.216 "num_base_bdevs_discovered": 2, 00:12:30.216 "num_base_bdevs_operational": 2, 00:12:30.216 "process": { 00:12:30.216 "type": "rebuild", 00:12:30.216 "target": "spare", 00:12:30.216 "progress": { 00:12:30.216 "blocks": 32768, 00:12:30.216 "percent": 51 00:12:30.216 } 00:12:30.216 }, 00:12:30.216 "base_bdevs_list": [ 00:12:30.216 { 00:12:30.216 "name": "spare", 00:12:30.216 "uuid": "688423b1-ccd4-501d-961b-fff45abcc314", 00:12:30.216 "is_configured": true, 00:12:30.216 "data_offset": 2048, 00:12:30.216 "data_size": 63488 00:12:30.216 }, 00:12:30.216 { 00:12:30.216 "name": "BaseBdev2", 00:12:30.216 "uuid": "44a28021-0ff3-5595-9b9b-64e02320bf80", 00:12:30.216 "is_configured": true, 00:12:30.216 "data_offset": 2048, 00:12:30.216 "data_size": 63488 00:12:30.216 } 00:12:30.216 ] 00:12:30.216 }' 00:12:30.216 04:10:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:30.216 [2024-11-21 04:10:30.020048] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:12:30.216 04:10:30 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:30.216 04:10:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:30.216 04:10:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:30.216 04:10:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:30.476 [2024-11-21 04:10:30.331054] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:12:30.476 [2024-11-21 04:10:30.331946] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:12:30.736 125.00 IOPS, 375.00 MiB/s [2024-11-21T04:10:30.709Z] [2024-11-21 04:10:30.549320] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:12:30.997 [2024-11-21 04:10:30.871053] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:12:30.997 [2024-11-21 04:10:30.871948] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:12:31.257 04:10:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:31.257 04:10:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:31.257 [2024-11-21 04:10:31.079208] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:12:31.257 04:10:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:31.257 04:10:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:31.257 04:10:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:12:31.257 04:10:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:31.257 04:10:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.257 04:10:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:31.257 04:10:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.257 04:10:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:31.257 04:10:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.257 04:10:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:31.257 "name": "raid_bdev1", 00:12:31.257 "uuid": "c0eabde8-5ce7-48dd-9e90-fb6f6599afdd", 00:12:31.257 "strip_size_kb": 0, 00:12:31.257 "state": "online", 00:12:31.257 "raid_level": "raid1", 00:12:31.257 "superblock": true, 00:12:31.257 "num_base_bdevs": 2, 00:12:31.257 "num_base_bdevs_discovered": 2, 00:12:31.257 "num_base_bdevs_operational": 2, 00:12:31.257 "process": { 00:12:31.257 "type": "rebuild", 00:12:31.257 "target": "spare", 00:12:31.257 "progress": { 00:12:31.257 "blocks": 47104, 00:12:31.257 "percent": 74 00:12:31.257 } 00:12:31.257 }, 00:12:31.257 "base_bdevs_list": [ 00:12:31.257 { 00:12:31.258 "name": "spare", 00:12:31.258 "uuid": "688423b1-ccd4-501d-961b-fff45abcc314", 00:12:31.258 "is_configured": true, 00:12:31.258 "data_offset": 2048, 00:12:31.258 "data_size": 63488 00:12:31.258 }, 00:12:31.258 { 00:12:31.258 "name": "BaseBdev2", 00:12:31.258 "uuid": "44a28021-0ff3-5595-9b9b-64e02320bf80", 00:12:31.258 "is_configured": true, 00:12:31.258 "data_offset": 2048, 00:12:31.258 "data_size": 63488 00:12:31.258 } 00:12:31.258 ] 00:12:31.258 }' 00:12:31.258 04:10:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:31.258 
04:10:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:31.258 04:10:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:31.258 04:10:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:31.258 04:10:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:31.778 109.83 IOPS, 329.50 MiB/s [2024-11-21T04:10:31.751Z] [2024-11-21 04:10:31.718131] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:12:32.346 [2024-11-21 04:10:32.058996] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:32.346 [2024-11-21 04:10:32.158894] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:32.346 [2024-11-21 04:10:32.160703] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:32.346 04:10:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:32.346 04:10:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:32.346 04:10:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:32.346 04:10:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:32.346 04:10:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:32.346 04:10:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:32.346 04:10:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.346 04:10:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:32.346 04:10:32 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.346 04:10:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:32.346 04:10:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.346 04:10:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:32.346 "name": "raid_bdev1", 00:12:32.346 "uuid": "c0eabde8-5ce7-48dd-9e90-fb6f6599afdd", 00:12:32.346 "strip_size_kb": 0, 00:12:32.346 "state": "online", 00:12:32.346 "raid_level": "raid1", 00:12:32.346 "superblock": true, 00:12:32.346 "num_base_bdevs": 2, 00:12:32.346 "num_base_bdevs_discovered": 2, 00:12:32.346 "num_base_bdevs_operational": 2, 00:12:32.346 "base_bdevs_list": [ 00:12:32.346 { 00:12:32.346 "name": "spare", 00:12:32.346 "uuid": "688423b1-ccd4-501d-961b-fff45abcc314", 00:12:32.346 "is_configured": true, 00:12:32.346 "data_offset": 2048, 00:12:32.346 "data_size": 63488 00:12:32.346 }, 00:12:32.346 { 00:12:32.346 "name": "BaseBdev2", 00:12:32.346 "uuid": "44a28021-0ff3-5595-9b9b-64e02320bf80", 00:12:32.346 "is_configured": true, 00:12:32.346 "data_offset": 2048, 00:12:32.346 "data_size": 63488 00:12:32.346 } 00:12:32.346 ] 00:12:32.346 }' 00:12:32.346 04:10:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:32.605 04:10:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:32.605 04:10:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:32.605 04:10:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:32.605 04:10:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:12:32.605 04:10:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:32.605 04:10:32 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:32.605 04:10:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:32.605 04:10:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:32.605 04:10:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:32.605 99.43 IOPS, 298.29 MiB/s [2024-11-21T04:10:32.578Z] 04:10:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.605 04:10:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:32.605 04:10:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.605 04:10:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:32.605 04:10:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.605 04:10:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:32.605 "name": "raid_bdev1", 00:12:32.605 "uuid": "c0eabde8-5ce7-48dd-9e90-fb6f6599afdd", 00:12:32.605 "strip_size_kb": 0, 00:12:32.605 "state": "online", 00:12:32.605 "raid_level": "raid1", 00:12:32.605 "superblock": true, 00:12:32.605 "num_base_bdevs": 2, 00:12:32.605 "num_base_bdevs_discovered": 2, 00:12:32.605 "num_base_bdevs_operational": 2, 00:12:32.605 "base_bdevs_list": [ 00:12:32.605 { 00:12:32.605 "name": "spare", 00:12:32.605 "uuid": "688423b1-ccd4-501d-961b-fff45abcc314", 00:12:32.605 "is_configured": true, 00:12:32.605 "data_offset": 2048, 00:12:32.605 "data_size": 63488 00:12:32.605 }, 00:12:32.605 { 00:12:32.605 "name": "BaseBdev2", 00:12:32.605 "uuid": "44a28021-0ff3-5595-9b9b-64e02320bf80", 00:12:32.605 "is_configured": true, 00:12:32.605 "data_offset": 2048, 00:12:32.605 "data_size": 63488 00:12:32.605 } 00:12:32.605 ] 
00:12:32.605 }' 00:12:32.605 04:10:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:32.605 04:10:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:32.605 04:10:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:32.605 04:10:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:32.605 04:10:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:32.605 04:10:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:32.605 04:10:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:32.605 04:10:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:32.605 04:10:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:32.605 04:10:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:32.605 04:10:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:32.605 04:10:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:32.605 04:10:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:32.605 04:10:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:32.605 04:10:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.605 04:10:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:32.605 04:10:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.605 04:10:32 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:32.605 04:10:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.605 04:10:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:32.605 "name": "raid_bdev1", 00:12:32.605 "uuid": "c0eabde8-5ce7-48dd-9e90-fb6f6599afdd", 00:12:32.605 "strip_size_kb": 0, 00:12:32.605 "state": "online", 00:12:32.605 "raid_level": "raid1", 00:12:32.605 "superblock": true, 00:12:32.605 "num_base_bdevs": 2, 00:12:32.605 "num_base_bdevs_discovered": 2, 00:12:32.605 "num_base_bdevs_operational": 2, 00:12:32.605 "base_bdevs_list": [ 00:12:32.605 { 00:12:32.605 "name": "spare", 00:12:32.605 "uuid": "688423b1-ccd4-501d-961b-fff45abcc314", 00:12:32.605 "is_configured": true, 00:12:32.605 "data_offset": 2048, 00:12:32.605 "data_size": 63488 00:12:32.605 }, 00:12:32.605 { 00:12:32.605 "name": "BaseBdev2", 00:12:32.605 "uuid": "44a28021-0ff3-5595-9b9b-64e02320bf80", 00:12:32.605 "is_configured": true, 00:12:32.605 "data_offset": 2048, 00:12:32.605 "data_size": 63488 00:12:32.605 } 00:12:32.605 ] 00:12:32.605 }' 00:12:32.605 04:10:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:32.605 04:10:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:33.174 04:10:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:33.174 04:10:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.174 04:10:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:33.174 [2024-11-21 04:10:32.952961] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:33.174 [2024-11-21 04:10:32.953092] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:33.174 00:12:33.174 Latency(us) 00:12:33.174 
[2024-11-21T04:10:33.147Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:33.174 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:12:33.174 raid_bdev1 : 7.68 94.83 284.49 0.00 0.00 14013.64 273.66 113099.68 00:12:33.174 [2024-11-21T04:10:33.147Z] =================================================================================================================== 00:12:33.174 [2024-11-21T04:10:33.147Z] Total : 94.83 284.49 0.00 0.00 14013.64 273.66 113099.68 00:12:33.174 { 00:12:33.174 "results": [ 00:12:33.174 { 00:12:33.174 "job": "raid_bdev1", 00:12:33.174 "core_mask": "0x1", 00:12:33.174 "workload": "randrw", 00:12:33.174 "percentage": 50, 00:12:33.174 "status": "finished", 00:12:33.174 "queue_depth": 2, 00:12:33.174 "io_size": 3145728, 00:12:33.174 "runtime": 7.676856, 00:12:33.174 "iops": 94.83048789764976, 00:12:33.174 "mibps": 284.4914636929493, 00:12:33.174 "io_failed": 0, 00:12:33.174 "io_timeout": 0, 00:12:33.174 "avg_latency_us": 14013.64073132108, 00:12:33.174 "min_latency_us": 273.6628820960699, 00:12:33.174 "max_latency_us": 113099.68209606987 00:12:33.174 } 00:12:33.174 ], 00:12:33.174 "core_count": 1 00:12:33.174 } 00:12:33.174 [2024-11-21 04:10:33.036784] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:33.174 [2024-11-21 04:10:33.036835] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:33.174 [2024-11-21 04:10:33.036921] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:33.174 [2024-11-21 04:10:33.036940] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:12:33.174 04:10:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.174 04:10:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.174 
04:10:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.174 04:10:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:33.174 04:10:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:12:33.174 04:10:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.174 04:10:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:33.174 04:10:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:33.174 04:10:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:12:33.174 04:10:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:12:33.174 04:10:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:33.174 04:10:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:12:33.174 04:10:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:33.174 04:10:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:33.174 04:10:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:33.174 04:10:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:12:33.174 04:10:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:33.174 04:10:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:33.174 04:10:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:12:33.433 /dev/nbd0 00:12:33.433 04:10:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:33.433 
04:10:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:33.433 04:10:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:33.433 04:10:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:12:33.433 04:10:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:33.433 04:10:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:33.433 04:10:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:33.433 04:10:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:12:33.433 04:10:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:33.434 04:10:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:33.434 04:10:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:33.434 1+0 records in 00:12:33.434 1+0 records out 00:12:33.434 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000331762 s, 12.3 MB/s 00:12:33.434 04:10:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:33.434 04:10:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:12:33.434 04:10:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:33.434 04:10:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:33.434 04:10:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:12:33.434 04:10:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:12:33.434 04:10:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:33.434 04:10:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:33.434 04:10:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:12:33.434 04:10:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:12:33.434 04:10:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:33.434 04:10:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:12:33.434 04:10:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:33.434 04:10:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:33.434 04:10:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:33.434 04:10:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:12:33.434 04:10:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:33.434 04:10:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:33.434 04:10:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:12:33.693 /dev/nbd1 00:12:33.693 04:10:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:33.693 04:10:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:33.693 04:10:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:33.693 04:10:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:12:33.693 04:10:33 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:33.693 04:10:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:33.693 04:10:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:33.693 04:10:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:12:33.693 04:10:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:33.693 04:10:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:33.693 04:10:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:33.693 1+0 records in 00:12:33.693 1+0 records out 00:12:33.693 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000559311 s, 7.3 MB/s 00:12:33.693 04:10:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:33.693 04:10:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:12:33.693 04:10:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:33.693 04:10:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:33.693 04:10:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:12:33.693 04:10:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:33.693 04:10:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:33.693 04:10:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:33.953 04:10:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:33.953 
04:10:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:33.953 04:10:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:33.953 04:10:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:33.953 04:10:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:12:33.953 04:10:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:33.953 04:10:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:33.953 04:10:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:33.953 04:10:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:33.953 04:10:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:33.953 04:10:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:33.953 04:10:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:33.953 04:10:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:33.953 04:10:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:12:33.953 04:10:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:33.953 04:10:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:33.953 04:10:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:33.953 04:10:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:33.953 04:10:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:33.953 
04:10:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:12:33.953 04:10:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:33.953 04:10:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:34.212 04:10:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:34.212 04:10:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:34.212 04:10:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:34.212 04:10:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:34.212 04:10:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:34.212 04:10:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:34.212 04:10:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:12:34.212 04:10:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:34.212 04:10:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:12:34.212 04:10:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:12:34.212 04:10:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.212 04:10:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:34.212 04:10:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.212 04:10:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:34.212 04:10:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.212 
04:10:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:34.212 [2024-11-21 04:10:34.145443] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:34.212 [2024-11-21 04:10:34.145551] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:34.212 [2024-11-21 04:10:34.145596] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:12:34.212 [2024-11-21 04:10:34.145624] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:34.212 [2024-11-21 04:10:34.148340] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:34.213 [2024-11-21 04:10:34.148416] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:34.213 [2024-11-21 04:10:34.148544] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:34.213 [2024-11-21 04:10:34.148634] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:34.213 [2024-11-21 04:10:34.148828] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:34.213 spare 00:12:34.213 04:10:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.213 04:10:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:12:34.213 04:10:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.213 04:10:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:34.472 [2024-11-21 04:10:34.248776] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:12:34.472 [2024-11-21 04:10:34.248857] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:34.472 [2024-11-21 04:10:34.249228] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000027720 00:12:34.472 [2024-11-21 04:10:34.249446] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:12:34.472 [2024-11-21 04:10:34.249498] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:12:34.472 [2024-11-21 04:10:34.249736] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:34.472 04:10:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.472 04:10:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:34.472 04:10:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:34.472 04:10:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:34.472 04:10:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:34.472 04:10:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:34.472 04:10:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:34.472 04:10:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:34.472 04:10:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:34.472 04:10:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:34.472 04:10:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:34.472 04:10:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.472 04:10:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:34.472 04:10:34 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.472 04:10:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:34.472 04:10:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.472 04:10:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:34.472 "name": "raid_bdev1", 00:12:34.472 "uuid": "c0eabde8-5ce7-48dd-9e90-fb6f6599afdd", 00:12:34.472 "strip_size_kb": 0, 00:12:34.472 "state": "online", 00:12:34.472 "raid_level": "raid1", 00:12:34.472 "superblock": true, 00:12:34.472 "num_base_bdevs": 2, 00:12:34.472 "num_base_bdevs_discovered": 2, 00:12:34.472 "num_base_bdevs_operational": 2, 00:12:34.472 "base_bdevs_list": [ 00:12:34.472 { 00:12:34.472 "name": "spare", 00:12:34.472 "uuid": "688423b1-ccd4-501d-961b-fff45abcc314", 00:12:34.472 "is_configured": true, 00:12:34.472 "data_offset": 2048, 00:12:34.472 "data_size": 63488 00:12:34.472 }, 00:12:34.472 { 00:12:34.472 "name": "BaseBdev2", 00:12:34.472 "uuid": "44a28021-0ff3-5595-9b9b-64e02320bf80", 00:12:34.472 "is_configured": true, 00:12:34.472 "data_offset": 2048, 00:12:34.472 "data_size": 63488 00:12:34.472 } 00:12:34.472 ] 00:12:34.472 }' 00:12:34.472 04:10:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:34.472 04:10:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:35.039 04:10:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:35.039 04:10:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:35.039 04:10:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:35.039 04:10:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:35.039 04:10:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:12:35.039 04:10:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:35.039 04:10:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.039 04:10:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.039 04:10:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:35.039 04:10:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.039 04:10:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:35.039 "name": "raid_bdev1", 00:12:35.039 "uuid": "c0eabde8-5ce7-48dd-9e90-fb6f6599afdd", 00:12:35.039 "strip_size_kb": 0, 00:12:35.039 "state": "online", 00:12:35.039 "raid_level": "raid1", 00:12:35.039 "superblock": true, 00:12:35.039 "num_base_bdevs": 2, 00:12:35.039 "num_base_bdevs_discovered": 2, 00:12:35.039 "num_base_bdevs_operational": 2, 00:12:35.039 "base_bdevs_list": [ 00:12:35.039 { 00:12:35.039 "name": "spare", 00:12:35.039 "uuid": "688423b1-ccd4-501d-961b-fff45abcc314", 00:12:35.039 "is_configured": true, 00:12:35.039 "data_offset": 2048, 00:12:35.039 "data_size": 63488 00:12:35.039 }, 00:12:35.039 { 00:12:35.039 "name": "BaseBdev2", 00:12:35.039 "uuid": "44a28021-0ff3-5595-9b9b-64e02320bf80", 00:12:35.039 "is_configured": true, 00:12:35.039 "data_offset": 2048, 00:12:35.039 "data_size": 63488 00:12:35.039 } 00:12:35.039 ] 00:12:35.040 }' 00:12:35.040 04:10:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:35.040 04:10:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:35.040 04:10:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:35.040 04:10:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == 
\n\o\n\e ]] 00:12:35.040 04:10:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.040 04:10:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:12:35.040 04:10:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.040 04:10:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:35.040 04:10:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.040 04:10:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:12:35.040 04:10:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:35.040 04:10:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.040 04:10:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:35.040 [2024-11-21 04:10:34.932650] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:35.040 04:10:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.040 04:10:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:35.040 04:10:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:35.040 04:10:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:35.040 04:10:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:35.040 04:10:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:35.040 04:10:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:35.040 04:10:34 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:35.040 04:10:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:35.040 04:10:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:35.040 04:10:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:35.040 04:10:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.040 04:10:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.040 04:10:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:35.040 04:10:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:35.040 04:10:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.040 04:10:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:35.040 "name": "raid_bdev1", 00:12:35.040 "uuid": "c0eabde8-5ce7-48dd-9e90-fb6f6599afdd", 00:12:35.040 "strip_size_kb": 0, 00:12:35.040 "state": "online", 00:12:35.040 "raid_level": "raid1", 00:12:35.040 "superblock": true, 00:12:35.040 "num_base_bdevs": 2, 00:12:35.040 "num_base_bdevs_discovered": 1, 00:12:35.040 "num_base_bdevs_operational": 1, 00:12:35.040 "base_bdevs_list": [ 00:12:35.040 { 00:12:35.040 "name": null, 00:12:35.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.040 "is_configured": false, 00:12:35.040 "data_offset": 0, 00:12:35.040 "data_size": 63488 00:12:35.040 }, 00:12:35.040 { 00:12:35.040 "name": "BaseBdev2", 00:12:35.040 "uuid": "44a28021-0ff3-5595-9b9b-64e02320bf80", 00:12:35.040 "is_configured": true, 00:12:35.040 "data_offset": 2048, 00:12:35.040 "data_size": 63488 00:12:35.040 } 00:12:35.040 ] 00:12:35.040 }' 00:12:35.040 04:10:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:12:35.040 04:10:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:35.609 04:10:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:35.609 04:10:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.609 04:10:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:35.609 [2024-11-21 04:10:35.408277] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:35.609 [2024-11-21 04:10:35.408596] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:35.609 [2024-11-21 04:10:35.408662] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:12:35.609 [2024-11-21 04:10:35.408746] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:35.609 [2024-11-21 04:10:35.418212] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000277f0 00:12:35.609 04:10:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.609 04:10:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:12:35.609 [2024-11-21 04:10:35.420506] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:36.549 04:10:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:36.549 04:10:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:36.549 04:10:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:36.549 04:10:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:36.549 04:10:36 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:36.549 04:10:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.549 04:10:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:36.549 04:10:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.549 04:10:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:36.549 04:10:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.549 04:10:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:36.549 "name": "raid_bdev1", 00:12:36.549 "uuid": "c0eabde8-5ce7-48dd-9e90-fb6f6599afdd", 00:12:36.549 "strip_size_kb": 0, 00:12:36.549 "state": "online", 00:12:36.549 "raid_level": "raid1", 00:12:36.549 "superblock": true, 00:12:36.549 "num_base_bdevs": 2, 00:12:36.549 "num_base_bdevs_discovered": 2, 00:12:36.549 "num_base_bdevs_operational": 2, 00:12:36.549 "process": { 00:12:36.549 "type": "rebuild", 00:12:36.549 "target": "spare", 00:12:36.549 "progress": { 00:12:36.549 "blocks": 20480, 00:12:36.549 "percent": 32 00:12:36.549 } 00:12:36.549 }, 00:12:36.549 "base_bdevs_list": [ 00:12:36.549 { 00:12:36.549 "name": "spare", 00:12:36.549 "uuid": "688423b1-ccd4-501d-961b-fff45abcc314", 00:12:36.549 "is_configured": true, 00:12:36.549 "data_offset": 2048, 00:12:36.549 "data_size": 63488 00:12:36.549 }, 00:12:36.549 { 00:12:36.549 "name": "BaseBdev2", 00:12:36.549 "uuid": "44a28021-0ff3-5595-9b9b-64e02320bf80", 00:12:36.549 "is_configured": true, 00:12:36.549 "data_offset": 2048, 00:12:36.549 "data_size": 63488 00:12:36.549 } 00:12:36.549 ] 00:12:36.549 }' 00:12:36.549 04:10:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:36.808 04:10:36 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:36.809 04:10:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:36.809 04:10:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:36.809 04:10:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:12:36.809 04:10:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.809 04:10:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:36.809 [2024-11-21 04:10:36.568738] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:36.809 [2024-11-21 04:10:36.628418] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:36.809 [2024-11-21 04:10:36.628551] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:36.809 [2024-11-21 04:10:36.628592] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:36.809 [2024-11-21 04:10:36.628618] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:36.809 04:10:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.809 04:10:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:36.809 04:10:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:36.809 04:10:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:36.809 04:10:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:36.809 04:10:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:36.809 04:10:36 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:36.809 04:10:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:36.809 04:10:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:36.809 04:10:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:36.809 04:10:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:36.809 04:10:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:36.809 04:10:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.809 04:10:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.809 04:10:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:36.809 04:10:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.809 04:10:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:36.809 "name": "raid_bdev1", 00:12:36.809 "uuid": "c0eabde8-5ce7-48dd-9e90-fb6f6599afdd", 00:12:36.809 "strip_size_kb": 0, 00:12:36.809 "state": "online", 00:12:36.809 "raid_level": "raid1", 00:12:36.809 "superblock": true, 00:12:36.809 "num_base_bdevs": 2, 00:12:36.809 "num_base_bdevs_discovered": 1, 00:12:36.809 "num_base_bdevs_operational": 1, 00:12:36.809 "base_bdevs_list": [ 00:12:36.809 { 00:12:36.809 "name": null, 00:12:36.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.809 "is_configured": false, 00:12:36.809 "data_offset": 0, 00:12:36.809 "data_size": 63488 00:12:36.809 }, 00:12:36.809 { 00:12:36.809 "name": "BaseBdev2", 00:12:36.809 "uuid": "44a28021-0ff3-5595-9b9b-64e02320bf80", 00:12:36.809 "is_configured": true, 00:12:36.809 "data_offset": 2048, 00:12:36.809 
"data_size": 63488 00:12:36.809 } 00:12:36.809 ] 00:12:36.809 }' 00:12:36.809 04:10:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:36.809 04:10:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:37.379 04:10:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:37.379 04:10:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.379 04:10:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:37.379 [2024-11-21 04:10:37.104880] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:37.379 [2024-11-21 04:10:37.105023] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:37.379 [2024-11-21 04:10:37.105072] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:37.379 [2024-11-21 04:10:37.105125] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:37.379 [2024-11-21 04:10:37.105676] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:37.379 [2024-11-21 04:10:37.105751] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:37.379 [2024-11-21 04:10:37.105904] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:37.379 [2024-11-21 04:10:37.105950] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:37.379 [2024-11-21 04:10:37.105997] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:37.379 [2024-11-21 04:10:37.106108] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:37.379 [2024-11-21 04:10:37.115527] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000278c0 00:12:37.379 spare 00:12:37.379 04:10:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.379 04:10:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:12:37.379 [2024-11-21 04:10:37.117825] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:38.319 04:10:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:38.319 04:10:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:38.319 04:10:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:38.319 04:10:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:38.319 04:10:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:38.319 04:10:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.319 04:10:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.319 04:10:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:38.319 04:10:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.319 04:10:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.319 04:10:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:38.319 "name": "raid_bdev1", 00:12:38.319 "uuid": "c0eabde8-5ce7-48dd-9e90-fb6f6599afdd", 00:12:38.320 "strip_size_kb": 0, 00:12:38.320 
"state": "online", 00:12:38.320 "raid_level": "raid1", 00:12:38.320 "superblock": true, 00:12:38.320 "num_base_bdevs": 2, 00:12:38.320 "num_base_bdevs_discovered": 2, 00:12:38.320 "num_base_bdevs_operational": 2, 00:12:38.320 "process": { 00:12:38.320 "type": "rebuild", 00:12:38.320 "target": "spare", 00:12:38.320 "progress": { 00:12:38.320 "blocks": 20480, 00:12:38.320 "percent": 32 00:12:38.320 } 00:12:38.320 }, 00:12:38.320 "base_bdevs_list": [ 00:12:38.320 { 00:12:38.320 "name": "spare", 00:12:38.320 "uuid": "688423b1-ccd4-501d-961b-fff45abcc314", 00:12:38.320 "is_configured": true, 00:12:38.320 "data_offset": 2048, 00:12:38.320 "data_size": 63488 00:12:38.320 }, 00:12:38.320 { 00:12:38.320 "name": "BaseBdev2", 00:12:38.320 "uuid": "44a28021-0ff3-5595-9b9b-64e02320bf80", 00:12:38.320 "is_configured": true, 00:12:38.320 "data_offset": 2048, 00:12:38.320 "data_size": 63488 00:12:38.320 } 00:12:38.320 ] 00:12:38.320 }' 00:12:38.320 04:10:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:38.320 04:10:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:38.320 04:10:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:38.320 04:10:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:38.320 04:10:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:12:38.320 04:10:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.320 04:10:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.320 [2024-11-21 04:10:38.278154] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:38.580 [2024-11-21 04:10:38.325864] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:12:38.580 [2024-11-21 04:10:38.325951] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:38.580 [2024-11-21 04:10:38.325973] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:38.580 [2024-11-21 04:10:38.325982] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:38.580 04:10:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.580 04:10:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:38.580 04:10:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:38.580 04:10:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:38.580 04:10:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:38.580 04:10:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:38.580 04:10:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:38.580 04:10:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:38.580 04:10:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:38.580 04:10:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:38.580 04:10:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:38.580 04:10:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.580 04:10:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:38.580 04:10:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.580 04:10:38 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.580 04:10:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.580 04:10:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:38.580 "name": "raid_bdev1", 00:12:38.580 "uuid": "c0eabde8-5ce7-48dd-9e90-fb6f6599afdd", 00:12:38.580 "strip_size_kb": 0, 00:12:38.580 "state": "online", 00:12:38.580 "raid_level": "raid1", 00:12:38.580 "superblock": true, 00:12:38.580 "num_base_bdevs": 2, 00:12:38.580 "num_base_bdevs_discovered": 1, 00:12:38.580 "num_base_bdevs_operational": 1, 00:12:38.580 "base_bdevs_list": [ 00:12:38.580 { 00:12:38.580 "name": null, 00:12:38.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.580 "is_configured": false, 00:12:38.580 "data_offset": 0, 00:12:38.580 "data_size": 63488 00:12:38.580 }, 00:12:38.580 { 00:12:38.580 "name": "BaseBdev2", 00:12:38.580 "uuid": "44a28021-0ff3-5595-9b9b-64e02320bf80", 00:12:38.580 "is_configured": true, 00:12:38.580 "data_offset": 2048, 00:12:38.580 "data_size": 63488 00:12:38.580 } 00:12:38.580 ] 00:12:38.580 }' 00:12:38.580 04:10:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:38.580 04:10:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.866 04:10:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:38.866 04:10:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:38.866 04:10:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:38.866 04:10:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:38.866 04:10:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:38.866 04:10:38 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.866 04:10:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.866 04:10:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:38.866 04:10:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.866 04:10:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.125 04:10:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:39.125 "name": "raid_bdev1", 00:12:39.125 "uuid": "c0eabde8-5ce7-48dd-9e90-fb6f6599afdd", 00:12:39.125 "strip_size_kb": 0, 00:12:39.125 "state": "online", 00:12:39.125 "raid_level": "raid1", 00:12:39.125 "superblock": true, 00:12:39.125 "num_base_bdevs": 2, 00:12:39.125 "num_base_bdevs_discovered": 1, 00:12:39.125 "num_base_bdevs_operational": 1, 00:12:39.125 "base_bdevs_list": [ 00:12:39.125 { 00:12:39.125 "name": null, 00:12:39.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.125 "is_configured": false, 00:12:39.125 "data_offset": 0, 00:12:39.125 "data_size": 63488 00:12:39.125 }, 00:12:39.125 { 00:12:39.126 "name": "BaseBdev2", 00:12:39.126 "uuid": "44a28021-0ff3-5595-9b9b-64e02320bf80", 00:12:39.126 "is_configured": true, 00:12:39.126 "data_offset": 2048, 00:12:39.126 "data_size": 63488 00:12:39.126 } 00:12:39.126 ] 00:12:39.126 }' 00:12:39.126 04:10:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:39.126 04:10:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:39.126 04:10:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:39.126 04:10:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:39.126 04:10:38 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:12:39.126 04:10:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.126 04:10:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:39.126 04:10:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.126 04:10:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:39.126 04:10:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.126 04:10:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:39.126 [2024-11-21 04:10:38.978006] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:39.126 [2024-11-21 04:10:38.978151] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:39.126 [2024-11-21 04:10:38.978186] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:39.126 [2024-11-21 04:10:38.978196] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:39.126 [2024-11-21 04:10:38.978709] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:39.126 [2024-11-21 04:10:38.978731] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:39.126 [2024-11-21 04:10:38.978821] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:12:39.126 [2024-11-21 04:10:38.978842] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:39.126 [2024-11-21 04:10:38.978863] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:39.126 [2024-11-21 04:10:38.978875] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:12:39.126 BaseBdev1 00:12:39.126 04:10:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.126 04:10:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:12:40.066 04:10:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:40.066 04:10:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:40.066 04:10:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:40.066 04:10:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:40.066 04:10:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:40.066 04:10:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:40.066 04:10:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:40.066 04:10:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:40.066 04:10:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:40.066 04:10:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:40.066 04:10:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.066 04:10:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.066 04:10:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.066 04:10:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:40.066 04:10:40 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.326 04:10:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:40.326 "name": "raid_bdev1", 00:12:40.326 "uuid": "c0eabde8-5ce7-48dd-9e90-fb6f6599afdd", 00:12:40.326 "strip_size_kb": 0, 00:12:40.326 "state": "online", 00:12:40.326 "raid_level": "raid1", 00:12:40.326 "superblock": true, 00:12:40.326 "num_base_bdevs": 2, 00:12:40.326 "num_base_bdevs_discovered": 1, 00:12:40.326 "num_base_bdevs_operational": 1, 00:12:40.326 "base_bdevs_list": [ 00:12:40.326 { 00:12:40.326 "name": null, 00:12:40.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.326 "is_configured": false, 00:12:40.326 "data_offset": 0, 00:12:40.326 "data_size": 63488 00:12:40.326 }, 00:12:40.326 { 00:12:40.326 "name": "BaseBdev2", 00:12:40.326 "uuid": "44a28021-0ff3-5595-9b9b-64e02320bf80", 00:12:40.326 "is_configured": true, 00:12:40.326 "data_offset": 2048, 00:12:40.326 "data_size": 63488 00:12:40.326 } 00:12:40.326 ] 00:12:40.326 }' 00:12:40.326 04:10:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:40.326 04:10:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:40.586 04:10:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:40.586 04:10:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:40.586 04:10:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:40.586 04:10:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:40.586 04:10:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:40.586 04:10:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.586 04:10:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:12:40.586 04:10:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:40.586 04:10:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.586 04:10:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.586 04:10:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:40.586 "name": "raid_bdev1", 00:12:40.586 "uuid": "c0eabde8-5ce7-48dd-9e90-fb6f6599afdd", 00:12:40.586 "strip_size_kb": 0, 00:12:40.586 "state": "online", 00:12:40.586 "raid_level": "raid1", 00:12:40.586 "superblock": true, 00:12:40.586 "num_base_bdevs": 2, 00:12:40.586 "num_base_bdevs_discovered": 1, 00:12:40.586 "num_base_bdevs_operational": 1, 00:12:40.586 "base_bdevs_list": [ 00:12:40.586 { 00:12:40.586 "name": null, 00:12:40.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.586 "is_configured": false, 00:12:40.586 "data_offset": 0, 00:12:40.586 "data_size": 63488 00:12:40.586 }, 00:12:40.586 { 00:12:40.586 "name": "BaseBdev2", 00:12:40.586 "uuid": "44a28021-0ff3-5595-9b9b-64e02320bf80", 00:12:40.586 "is_configured": true, 00:12:40.586 "data_offset": 2048, 00:12:40.586 "data_size": 63488 00:12:40.586 } 00:12:40.586 ] 00:12:40.586 }' 00:12:40.586 04:10:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:40.586 04:10:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:40.586 04:10:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:40.846 04:10:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:40.846 04:10:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:40.846 04:10:40 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@652 -- # local es=0 00:12:40.846 04:10:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:40.846 04:10:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:40.846 04:10:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:40.846 04:10:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:40.846 04:10:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:40.847 04:10:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:40.847 04:10:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.847 04:10:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:40.847 [2024-11-21 04:10:40.583601] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:40.847 [2024-11-21 04:10:40.583868] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:40.847 [2024-11-21 04:10:40.583934] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:40.847 request: 00:12:40.847 { 00:12:40.847 "base_bdev": "BaseBdev1", 00:12:40.847 "raid_bdev": "raid_bdev1", 00:12:40.847 "method": "bdev_raid_add_base_bdev", 00:12:40.847 "req_id": 1 00:12:40.847 } 00:12:40.847 Got JSON-RPC error response 00:12:40.847 response: 00:12:40.847 { 00:12:40.847 "code": -22, 00:12:40.847 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:12:40.847 } 00:12:40.847 04:10:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:12:40.847 04:10:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:12:40.847 04:10:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:40.847 04:10:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:40.847 04:10:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:40.847 04:10:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:12:41.787 04:10:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:41.787 04:10:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:41.787 04:10:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:41.787 04:10:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:41.787 04:10:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:41.787 04:10:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:41.787 04:10:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:41.787 04:10:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:41.787 04:10:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:41.787 04:10:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:41.787 04:10:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.787 04:10:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.787 04:10:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.787 04:10:41 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:41.787 04:10:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.787 04:10:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:41.787 "name": "raid_bdev1", 00:12:41.787 "uuid": "c0eabde8-5ce7-48dd-9e90-fb6f6599afdd", 00:12:41.787 "strip_size_kb": 0, 00:12:41.787 "state": "online", 00:12:41.787 "raid_level": "raid1", 00:12:41.787 "superblock": true, 00:12:41.787 "num_base_bdevs": 2, 00:12:41.787 "num_base_bdevs_discovered": 1, 00:12:41.787 "num_base_bdevs_operational": 1, 00:12:41.787 "base_bdevs_list": [ 00:12:41.787 { 00:12:41.787 "name": null, 00:12:41.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.787 "is_configured": false, 00:12:41.787 "data_offset": 0, 00:12:41.787 "data_size": 63488 00:12:41.787 }, 00:12:41.787 { 00:12:41.787 "name": "BaseBdev2", 00:12:41.787 "uuid": "44a28021-0ff3-5595-9b9b-64e02320bf80", 00:12:41.787 "is_configured": true, 00:12:41.787 "data_offset": 2048, 00:12:41.787 "data_size": 63488 00:12:41.787 } 00:12:41.787 ] 00:12:41.787 }' 00:12:41.787 04:10:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:41.787 04:10:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.047 04:10:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:42.047 04:10:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:42.047 04:10:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:42.048 04:10:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:42.308 04:10:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:42.308 04:10:42 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:42.308 04:10:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.308 04:10:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.308 04:10:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.308 04:10:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.308 04:10:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:42.308 "name": "raid_bdev1", 00:12:42.308 "uuid": "c0eabde8-5ce7-48dd-9e90-fb6f6599afdd", 00:12:42.308 "strip_size_kb": 0, 00:12:42.308 "state": "online", 00:12:42.308 "raid_level": "raid1", 00:12:42.308 "superblock": true, 00:12:42.308 "num_base_bdevs": 2, 00:12:42.308 "num_base_bdevs_discovered": 1, 00:12:42.308 "num_base_bdevs_operational": 1, 00:12:42.308 "base_bdevs_list": [ 00:12:42.308 { 00:12:42.308 "name": null, 00:12:42.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.308 "is_configured": false, 00:12:42.308 "data_offset": 0, 00:12:42.308 "data_size": 63488 00:12:42.308 }, 00:12:42.308 { 00:12:42.308 "name": "BaseBdev2", 00:12:42.308 "uuid": "44a28021-0ff3-5595-9b9b-64e02320bf80", 00:12:42.308 "is_configured": true, 00:12:42.308 "data_offset": 2048, 00:12:42.308 "data_size": 63488 00:12:42.308 } 00:12:42.308 ] 00:12:42.308 }' 00:12:42.308 04:10:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:42.308 04:10:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:42.308 04:10:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:42.308 04:10:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:42.308 04:10:42 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 87549 00:12:42.308 04:10:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 87549 ']' 00:12:42.308 04:10:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 87549 00:12:42.308 04:10:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:12:42.308 04:10:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:42.308 04:10:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87549 00:12:42.308 killing process with pid 87549 00:12:42.308 Received shutdown signal, test time was about 16.812422 seconds 00:12:42.308 00:12:42.308 Latency(us) 00:12:42.308 [2024-11-21T04:10:42.281Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:42.308 [2024-11-21T04:10:42.281Z] =================================================================================================================== 00:12:42.308 [2024-11-21T04:10:42.281Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:42.308 04:10:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:42.308 04:10:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:42.308 04:10:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87549' 00:12:42.308 04:10:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 87549 00:12:42.308 [2024-11-21 04:10:42.150690] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:42.308 04:10:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 87549 00:12:42.308 [2024-11-21 04:10:42.150848] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:42.308 [2024-11-21 04:10:42.150914] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:42.308 [2024-11-21 04:10:42.150927] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:12:42.308 [2024-11-21 04:10:42.198477] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:42.568 04:10:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:12:42.568 00:12:42.568 real 0m18.840s 00:12:42.568 user 0m24.941s 00:12:42.568 sys 0m2.367s 00:12:42.568 04:10:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:42.568 ************************************ 00:12:42.568 END TEST raid_rebuild_test_sb_io 00:12:42.568 ************************************ 00:12:42.568 04:10:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.828 04:10:42 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:12:42.828 04:10:42 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:12:42.828 04:10:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:42.828 04:10:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:42.828 04:10:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:42.828 ************************************ 00:12:42.828 START TEST raid_rebuild_test 00:12:42.828 ************************************ 00:12:42.828 04:10:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:12:42.828 04:10:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:42.828 04:10:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:12:42.828 04:10:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:42.828 04:10:42 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:42.828 04:10:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:42.828 04:10:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:42.828 04:10:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:42.828 04:10:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:42.828 04:10:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:42.828 04:10:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:42.828 04:10:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:42.828 04:10:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:42.828 04:10:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:42.828 04:10:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:12:42.828 04:10:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:42.828 04:10:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:42.828 04:10:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:12:42.828 04:10:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:42.828 04:10:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:42.828 04:10:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:42.828 04:10:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:42.828 04:10:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:42.828 04:10:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 
00:12:42.828 04:10:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:42.828 04:10:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:42.828 04:10:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:42.829 04:10:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:42.829 04:10:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:42.829 04:10:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:12:42.829 04:10:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=88231 00:12:42.829 04:10:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:42.829 04:10:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 88231 00:12:42.829 04:10:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 88231 ']' 00:12:42.829 04:10:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:42.829 04:10:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:42.829 04:10:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:42.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:42.829 04:10:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:42.829 04:10:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.829 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:42.829 Zero copy mechanism will not be used. 
00:12:42.829 [2024-11-21 04:10:42.698297] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:12:42.829 [2024-11-21 04:10:42.698432] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88231 ] 00:12:43.088 [2024-11-21 04:10:42.855055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:43.088 [2024-11-21 04:10:42.893897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.088 [2024-11-21 04:10:42.969775] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:43.088 [2024-11-21 04:10:42.969816] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:43.658 04:10:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:43.658 04:10:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:12:43.658 04:10:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:43.658 04:10:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:43.658 04:10:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.658 04:10:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.658 BaseBdev1_malloc 00:12:43.658 04:10:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.658 04:10:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:43.658 04:10:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.658 04:10:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.658 
[2024-11-21 04:10:43.539387] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:43.658 [2024-11-21 04:10:43.539456] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:43.658 [2024-11-21 04:10:43.539487] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:12:43.658 [2024-11-21 04:10:43.539507] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:43.658 [2024-11-21 04:10:43.541962] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:43.658 [2024-11-21 04:10:43.542092] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:43.658 BaseBdev1 00:12:43.658 04:10:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.658 04:10:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:43.658 04:10:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:43.658 04:10:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.658 04:10:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.658 BaseBdev2_malloc 00:12:43.658 04:10:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.658 04:10:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:43.658 04:10:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.658 04:10:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.658 [2024-11-21 04:10:43.573712] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:43.658 [2024-11-21 04:10:43.573760] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:12:43.658 [2024-11-21 04:10:43.573783] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:43.658 [2024-11-21 04:10:43.573791] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:43.658 [2024-11-21 04:10:43.576184] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:43.658 [2024-11-21 04:10:43.576235] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:43.658 BaseBdev2 00:12:43.658 04:10:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.658 04:10:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:43.658 04:10:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:43.658 04:10:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.658 04:10:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.658 BaseBdev3_malloc 00:12:43.658 04:10:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.658 04:10:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:12:43.658 04:10:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.658 04:10:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.658 [2024-11-21 04:10:43.608208] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:12:43.658 [2024-11-21 04:10:43.608280] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:43.658 [2024-11-21 04:10:43.608305] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:43.658 [2024-11-21 04:10:43.608314] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:43.659 [2024-11-21 04:10:43.610748] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:43.659 [2024-11-21 04:10:43.610783] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:43.659 BaseBdev3 00:12:43.659 04:10:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.659 04:10:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:43.659 04:10:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:43.659 04:10:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.659 04:10:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.919 BaseBdev4_malloc 00:12:43.919 04:10:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.919 04:10:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:12:43.919 04:10:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.919 04:10:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.919 [2024-11-21 04:10:43.657911] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:12:43.919 [2024-11-21 04:10:43.658005] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:43.919 [2024-11-21 04:10:43.658054] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:43.919 [2024-11-21 04:10:43.658075] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:43.919 [2024-11-21 04:10:43.661779] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:43.919 [2024-11-21 04:10:43.661914] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:43.919 BaseBdev4 00:12:43.919 04:10:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.919 04:10:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:43.919 04:10:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.919 04:10:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.919 spare_malloc 00:12:43.919 04:10:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.919 04:10:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:43.919 04:10:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.919 04:10:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.919 spare_delay 00:12:43.919 04:10:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.919 04:10:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:43.919 04:10:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.919 04:10:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.919 [2024-11-21 04:10:43.704619] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:43.919 [2024-11-21 04:10:43.704662] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:43.919 [2024-11-21 04:10:43.704682] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:12:43.919 [2024-11-21 04:10:43.704692] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:43.919 [2024-11-21 
04:10:43.707165] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:43.919 [2024-11-21 04:10:43.707202] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:43.919 spare 00:12:43.919 04:10:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.919 04:10:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:12:43.919 04:10:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.919 04:10:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.919 [2024-11-21 04:10:43.716607] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:43.919 [2024-11-21 04:10:43.718756] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:43.919 [2024-11-21 04:10:43.718863] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:43.919 [2024-11-21 04:10:43.718945] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:43.919 [2024-11-21 04:10:43.719077] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:12:43.919 [2024-11-21 04:10:43.719127] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:43.919 [2024-11-21 04:10:43.719454] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:12:43.920 [2024-11-21 04:10:43.719644] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:12:43.920 [2024-11-21 04:10:43.719691] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:12:43.920 [2024-11-21 04:10:43.719884] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:12:43.920 04:10:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.920 04:10:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:43.920 04:10:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:43.920 04:10:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:43.920 04:10:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:43.920 04:10:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:43.920 04:10:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:43.920 04:10:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:43.920 04:10:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.920 04:10:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:43.920 04:10:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:43.920 04:10:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.920 04:10:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:43.920 04:10:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.920 04:10:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.920 04:10:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.920 04:10:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.920 "name": "raid_bdev1", 00:12:43.920 "uuid": "26557b76-b5d6-448f-a7b2-70dfcf29d149", 00:12:43.920 "strip_size_kb": 0, 00:12:43.920 "state": "online", 00:12:43.920 "raid_level": 
"raid1", 00:12:43.920 "superblock": false, 00:12:43.920 "num_base_bdevs": 4, 00:12:43.920 "num_base_bdevs_discovered": 4, 00:12:43.920 "num_base_bdevs_operational": 4, 00:12:43.920 "base_bdevs_list": [ 00:12:43.920 { 00:12:43.920 "name": "BaseBdev1", 00:12:43.920 "uuid": "4eba8bba-d31a-54b9-a9a6-c689798697f3", 00:12:43.920 "is_configured": true, 00:12:43.920 "data_offset": 0, 00:12:43.920 "data_size": 65536 00:12:43.920 }, 00:12:43.920 { 00:12:43.920 "name": "BaseBdev2", 00:12:43.920 "uuid": "46365886-66f3-549a-8296-3c52be38ef09", 00:12:43.920 "is_configured": true, 00:12:43.920 "data_offset": 0, 00:12:43.920 "data_size": 65536 00:12:43.920 }, 00:12:43.920 { 00:12:43.920 "name": "BaseBdev3", 00:12:43.920 "uuid": "3341d54d-69e4-54fd-b4ae-6645f1a21fe4", 00:12:43.920 "is_configured": true, 00:12:43.920 "data_offset": 0, 00:12:43.920 "data_size": 65536 00:12:43.920 }, 00:12:43.920 { 00:12:43.920 "name": "BaseBdev4", 00:12:43.920 "uuid": "1917f126-1667-556a-98ea-47187015fb11", 00:12:43.920 "is_configured": true, 00:12:43.920 "data_offset": 0, 00:12:43.920 "data_size": 65536 00:12:43.920 } 00:12:43.920 ] 00:12:43.920 }' 00:12:43.920 04:10:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.920 04:10:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.541 04:10:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:44.541 04:10:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:44.541 04:10:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.541 04:10:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.541 [2024-11-21 04:10:44.176220] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:44.541 04:10:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.541 04:10:44 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:44.541 04:10:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:44.541 04:10:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.541 04:10:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.541 04:10:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.541 04:10:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.541 04:10:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:44.541 04:10:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:44.541 04:10:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:44.541 04:10:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:44.541 04:10:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:44.541 04:10:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:44.541 04:10:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:44.541 04:10:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:44.541 04:10:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:44.541 04:10:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:44.541 04:10:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:44.541 04:10:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:44.541 04:10:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:44.541 04:10:44 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:44.541 [2024-11-21 04:10:44.431597] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:12:44.541 /dev/nbd0 00:12:44.828 04:10:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:44.828 04:10:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:44.828 04:10:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:44.828 04:10:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:44.828 04:10:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:44.828 04:10:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:44.828 04:10:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:44.828 04:10:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:44.828 04:10:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:44.828 04:10:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:44.828 04:10:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:44.828 1+0 records in 00:12:44.828 1+0 records out 00:12:44.828 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00058667 s, 7.0 MB/s 00:12:44.828 04:10:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:44.828 04:10:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:44.828 04:10:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:12:44.828 04:10:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:44.828 04:10:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:44.828 04:10:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:44.828 04:10:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:44.828 04:10:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:44.828 04:10:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:44.828 04:10:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:12:51.406 65536+0 records in 00:12:51.406 65536+0 records out 00:12:51.406 33554432 bytes (34 MB, 32 MiB) copied, 5.80589 s, 5.8 MB/s 00:12:51.406 04:10:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:51.406 04:10:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:51.406 04:10:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:51.406 04:10:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:51.406 04:10:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:51.406 04:10:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:51.406 04:10:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:51.406 [2024-11-21 04:10:50.503131] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:51.406 04:10:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:51.406 04:10:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:51.406 
04:10:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:51.406 04:10:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:51.406 04:10:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:51.406 04:10:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:51.406 04:10:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:51.406 04:10:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:51.406 04:10:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:51.406 04:10:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.406 04:10:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.406 [2024-11-21 04:10:50.535135] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:51.406 04:10:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.406 04:10:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:51.406 04:10:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:51.406 04:10:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:51.406 04:10:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:51.406 04:10:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:51.406 04:10:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:51.406 04:10:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:51.406 04:10:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:51.406 04:10:50 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:51.406 04:10:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:51.406 04:10:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.406 04:10:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:51.406 04:10:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.406 04:10:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.406 04:10:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.406 04:10:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:51.406 "name": "raid_bdev1", 00:12:51.406 "uuid": "26557b76-b5d6-448f-a7b2-70dfcf29d149", 00:12:51.406 "strip_size_kb": 0, 00:12:51.406 "state": "online", 00:12:51.406 "raid_level": "raid1", 00:12:51.406 "superblock": false, 00:12:51.406 "num_base_bdevs": 4, 00:12:51.406 "num_base_bdevs_discovered": 3, 00:12:51.406 "num_base_bdevs_operational": 3, 00:12:51.406 "base_bdevs_list": [ 00:12:51.406 { 00:12:51.406 "name": null, 00:12:51.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.406 "is_configured": false, 00:12:51.406 "data_offset": 0, 00:12:51.406 "data_size": 65536 00:12:51.406 }, 00:12:51.406 { 00:12:51.406 "name": "BaseBdev2", 00:12:51.406 "uuid": "46365886-66f3-549a-8296-3c52be38ef09", 00:12:51.406 "is_configured": true, 00:12:51.406 "data_offset": 0, 00:12:51.406 "data_size": 65536 00:12:51.406 }, 00:12:51.406 { 00:12:51.406 "name": "BaseBdev3", 00:12:51.407 "uuid": "3341d54d-69e4-54fd-b4ae-6645f1a21fe4", 00:12:51.407 "is_configured": true, 00:12:51.407 "data_offset": 0, 00:12:51.407 "data_size": 65536 00:12:51.407 }, 00:12:51.407 { 00:12:51.407 "name": "BaseBdev4", 00:12:51.407 "uuid": "1917f126-1667-556a-98ea-47187015fb11", 00:12:51.407 
"is_configured": true, 00:12:51.407 "data_offset": 0, 00:12:51.407 "data_size": 65536 00:12:51.407 } 00:12:51.407 ] 00:12:51.407 }' 00:12:51.407 04:10:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:51.407 04:10:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.407 04:10:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:51.407 04:10:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.407 04:10:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.407 [2024-11-21 04:10:50.974374] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:51.407 [2024-11-21 04:10:50.981628] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d063c0 00:12:51.407 04:10:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.407 04:10:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:51.407 [2024-11-21 04:10:50.983966] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:52.345 04:10:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:52.345 04:10:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:52.345 04:10:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:52.345 04:10:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:52.345 04:10:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:52.345 04:10:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.345 04:10:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:12:52.345 04:10:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.345 04:10:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.345 04:10:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.345 04:10:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:52.345 "name": "raid_bdev1", 00:12:52.345 "uuid": "26557b76-b5d6-448f-a7b2-70dfcf29d149", 00:12:52.345 "strip_size_kb": 0, 00:12:52.345 "state": "online", 00:12:52.345 "raid_level": "raid1", 00:12:52.345 "superblock": false, 00:12:52.345 "num_base_bdevs": 4, 00:12:52.345 "num_base_bdevs_discovered": 4, 00:12:52.345 "num_base_bdevs_operational": 4, 00:12:52.345 "process": { 00:12:52.345 "type": "rebuild", 00:12:52.345 "target": "spare", 00:12:52.345 "progress": { 00:12:52.345 "blocks": 20480, 00:12:52.345 "percent": 31 00:12:52.345 } 00:12:52.345 }, 00:12:52.345 "base_bdevs_list": [ 00:12:52.345 { 00:12:52.345 "name": "spare", 00:12:52.345 "uuid": "ab700a0b-abbd-59db-8e5f-5c39697755a1", 00:12:52.345 "is_configured": true, 00:12:52.345 "data_offset": 0, 00:12:52.345 "data_size": 65536 00:12:52.345 }, 00:12:52.345 { 00:12:52.345 "name": "BaseBdev2", 00:12:52.345 "uuid": "46365886-66f3-549a-8296-3c52be38ef09", 00:12:52.345 "is_configured": true, 00:12:52.345 "data_offset": 0, 00:12:52.345 "data_size": 65536 00:12:52.345 }, 00:12:52.345 { 00:12:52.345 "name": "BaseBdev3", 00:12:52.345 "uuid": "3341d54d-69e4-54fd-b4ae-6645f1a21fe4", 00:12:52.345 "is_configured": true, 00:12:52.345 "data_offset": 0, 00:12:52.345 "data_size": 65536 00:12:52.345 }, 00:12:52.345 { 00:12:52.345 "name": "BaseBdev4", 00:12:52.345 "uuid": "1917f126-1667-556a-98ea-47187015fb11", 00:12:52.345 "is_configured": true, 00:12:52.345 "data_offset": 0, 00:12:52.345 "data_size": 65536 00:12:52.345 } 00:12:52.345 ] 00:12:52.345 }' 00:12:52.345 04:10:52 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:52.345 04:10:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:52.345 04:10:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:52.345 04:10:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:52.346 04:10:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:52.346 04:10:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.346 04:10:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.346 [2024-11-21 04:10:52.128531] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:52.346 [2024-11-21 04:10:52.192383] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:52.346 [2024-11-21 04:10:52.192501] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:52.346 [2024-11-21 04:10:52.192523] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:52.346 [2024-11-21 04:10:52.192531] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:52.346 04:10:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.346 04:10:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:52.346 04:10:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:52.346 04:10:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:52.346 04:10:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:52.346 04:10:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:12:52.346 04:10:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:52.346 04:10:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:52.346 04:10:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:52.346 04:10:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:52.346 04:10:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.346 04:10:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.346 04:10:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.346 04:10:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.346 04:10:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.346 04:10:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.346 04:10:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:52.346 "name": "raid_bdev1", 00:12:52.346 "uuid": "26557b76-b5d6-448f-a7b2-70dfcf29d149", 00:12:52.346 "strip_size_kb": 0, 00:12:52.346 "state": "online", 00:12:52.346 "raid_level": "raid1", 00:12:52.346 "superblock": false, 00:12:52.346 "num_base_bdevs": 4, 00:12:52.346 "num_base_bdevs_discovered": 3, 00:12:52.346 "num_base_bdevs_operational": 3, 00:12:52.346 "base_bdevs_list": [ 00:12:52.346 { 00:12:52.346 "name": null, 00:12:52.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.346 "is_configured": false, 00:12:52.346 "data_offset": 0, 00:12:52.346 "data_size": 65536 00:12:52.346 }, 00:12:52.346 { 00:12:52.346 "name": "BaseBdev2", 00:12:52.346 "uuid": "46365886-66f3-549a-8296-3c52be38ef09", 00:12:52.346 "is_configured": true, 00:12:52.346 "data_offset": 0, 00:12:52.346 "data_size": 65536 00:12:52.346 }, 00:12:52.346 { 
00:12:52.346 "name": "BaseBdev3", 00:12:52.346 "uuid": "3341d54d-69e4-54fd-b4ae-6645f1a21fe4", 00:12:52.346 "is_configured": true, 00:12:52.346 "data_offset": 0, 00:12:52.346 "data_size": 65536 00:12:52.346 }, 00:12:52.346 { 00:12:52.346 "name": "BaseBdev4", 00:12:52.346 "uuid": "1917f126-1667-556a-98ea-47187015fb11", 00:12:52.346 "is_configured": true, 00:12:52.346 "data_offset": 0, 00:12:52.346 "data_size": 65536 00:12:52.346 } 00:12:52.346 ] 00:12:52.346 }' 00:12:52.346 04:10:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.346 04:10:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.915 04:10:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:52.915 04:10:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:52.915 04:10:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:52.915 04:10:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:52.915 04:10:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:52.915 04:10:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.915 04:10:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.915 04:10:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.915 04:10:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.915 04:10:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.915 04:10:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:52.915 "name": "raid_bdev1", 00:12:52.915 "uuid": "26557b76-b5d6-448f-a7b2-70dfcf29d149", 00:12:52.915 "strip_size_kb": 0, 00:12:52.915 "state": "online", 
00:12:52.915 "raid_level": "raid1", 00:12:52.915 "superblock": false, 00:12:52.915 "num_base_bdevs": 4, 00:12:52.915 "num_base_bdevs_discovered": 3, 00:12:52.915 "num_base_bdevs_operational": 3, 00:12:52.915 "base_bdevs_list": [ 00:12:52.915 { 00:12:52.915 "name": null, 00:12:52.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.915 "is_configured": false, 00:12:52.915 "data_offset": 0, 00:12:52.915 "data_size": 65536 00:12:52.915 }, 00:12:52.915 { 00:12:52.915 "name": "BaseBdev2", 00:12:52.915 "uuid": "46365886-66f3-549a-8296-3c52be38ef09", 00:12:52.915 "is_configured": true, 00:12:52.915 "data_offset": 0, 00:12:52.915 "data_size": 65536 00:12:52.915 }, 00:12:52.915 { 00:12:52.915 "name": "BaseBdev3", 00:12:52.915 "uuid": "3341d54d-69e4-54fd-b4ae-6645f1a21fe4", 00:12:52.915 "is_configured": true, 00:12:52.915 "data_offset": 0, 00:12:52.915 "data_size": 65536 00:12:52.915 }, 00:12:52.915 { 00:12:52.915 "name": "BaseBdev4", 00:12:52.915 "uuid": "1917f126-1667-556a-98ea-47187015fb11", 00:12:52.915 "is_configured": true, 00:12:52.915 "data_offset": 0, 00:12:52.915 "data_size": 65536 00:12:52.915 } 00:12:52.915 ] 00:12:52.915 }' 00:12:52.915 04:10:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:52.915 04:10:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:52.915 04:10:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:52.915 04:10:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:52.915 04:10:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:52.915 04:10:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.915 04:10:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.915 [2024-11-21 04:10:52.739184] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:52.915 [2024-11-21 04:10:52.746096] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d06490 00:12:52.915 04:10:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.915 04:10:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:52.915 [2024-11-21 04:10:52.748439] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:53.855 04:10:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:53.855 04:10:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:53.855 04:10:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:53.855 04:10:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:53.855 04:10:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:53.855 04:10:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.855 04:10:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.855 04:10:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:53.855 04:10:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.855 04:10:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.855 04:10:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:53.855 "name": "raid_bdev1", 00:12:53.855 "uuid": "26557b76-b5d6-448f-a7b2-70dfcf29d149", 00:12:53.855 "strip_size_kb": 0, 00:12:53.855 "state": "online", 00:12:53.855 "raid_level": "raid1", 00:12:53.855 "superblock": false, 00:12:53.855 "num_base_bdevs": 4, 00:12:53.855 
"num_base_bdevs_discovered": 4, 00:12:53.855 "num_base_bdevs_operational": 4, 00:12:53.855 "process": { 00:12:53.855 "type": "rebuild", 00:12:53.855 "target": "spare", 00:12:53.855 "progress": { 00:12:53.855 "blocks": 20480, 00:12:53.855 "percent": 31 00:12:53.855 } 00:12:53.855 }, 00:12:53.855 "base_bdevs_list": [ 00:12:53.855 { 00:12:53.855 "name": "spare", 00:12:53.855 "uuid": "ab700a0b-abbd-59db-8e5f-5c39697755a1", 00:12:53.855 "is_configured": true, 00:12:53.855 "data_offset": 0, 00:12:53.855 "data_size": 65536 00:12:53.855 }, 00:12:53.855 { 00:12:53.855 "name": "BaseBdev2", 00:12:53.855 "uuid": "46365886-66f3-549a-8296-3c52be38ef09", 00:12:53.855 "is_configured": true, 00:12:53.855 "data_offset": 0, 00:12:53.855 "data_size": 65536 00:12:53.855 }, 00:12:53.855 { 00:12:53.855 "name": "BaseBdev3", 00:12:53.855 "uuid": "3341d54d-69e4-54fd-b4ae-6645f1a21fe4", 00:12:53.855 "is_configured": true, 00:12:53.855 "data_offset": 0, 00:12:53.855 "data_size": 65536 00:12:53.855 }, 00:12:53.855 { 00:12:53.855 "name": "BaseBdev4", 00:12:53.855 "uuid": "1917f126-1667-556a-98ea-47187015fb11", 00:12:53.855 "is_configured": true, 00:12:53.855 "data_offset": 0, 00:12:53.856 "data_size": 65536 00:12:53.856 } 00:12:53.856 ] 00:12:53.856 }' 00:12:53.856 04:10:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:54.116 04:10:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:54.116 04:10:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:54.116 04:10:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:54.116 04:10:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:54.116 04:10:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:12:54.116 04:10:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = 
raid1 ']' 00:12:54.116 04:10:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:12:54.116 04:10:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:54.116 04:10:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.116 04:10:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.116 [2024-11-21 04:10:53.908399] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:54.116 [2024-11-21 04:10:53.956184] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d06490 00:12:54.116 04:10:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.116 04:10:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:12:54.116 04:10:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:12:54.116 04:10:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:54.116 04:10:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:54.116 04:10:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:54.116 04:10:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:54.116 04:10:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:54.116 04:10:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.116 04:10:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:54.116 04:10:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.116 04:10:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.116 04:10:53 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.116 04:10:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:54.116 "name": "raid_bdev1", 00:12:54.116 "uuid": "26557b76-b5d6-448f-a7b2-70dfcf29d149", 00:12:54.116 "strip_size_kb": 0, 00:12:54.116 "state": "online", 00:12:54.116 "raid_level": "raid1", 00:12:54.116 "superblock": false, 00:12:54.116 "num_base_bdevs": 4, 00:12:54.116 "num_base_bdevs_discovered": 3, 00:12:54.116 "num_base_bdevs_operational": 3, 00:12:54.116 "process": { 00:12:54.116 "type": "rebuild", 00:12:54.116 "target": "spare", 00:12:54.116 "progress": { 00:12:54.116 "blocks": 24576, 00:12:54.116 "percent": 37 00:12:54.116 } 00:12:54.116 }, 00:12:54.116 "base_bdevs_list": [ 00:12:54.116 { 00:12:54.116 "name": "spare", 00:12:54.116 "uuid": "ab700a0b-abbd-59db-8e5f-5c39697755a1", 00:12:54.116 "is_configured": true, 00:12:54.116 "data_offset": 0, 00:12:54.116 "data_size": 65536 00:12:54.116 }, 00:12:54.116 { 00:12:54.116 "name": null, 00:12:54.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.116 "is_configured": false, 00:12:54.116 "data_offset": 0, 00:12:54.116 "data_size": 65536 00:12:54.116 }, 00:12:54.116 { 00:12:54.116 "name": "BaseBdev3", 00:12:54.116 "uuid": "3341d54d-69e4-54fd-b4ae-6645f1a21fe4", 00:12:54.116 "is_configured": true, 00:12:54.116 "data_offset": 0, 00:12:54.116 "data_size": 65536 00:12:54.116 }, 00:12:54.116 { 00:12:54.116 "name": "BaseBdev4", 00:12:54.116 "uuid": "1917f126-1667-556a-98ea-47187015fb11", 00:12:54.116 "is_configured": true, 00:12:54.116 "data_offset": 0, 00:12:54.116 "data_size": 65536 00:12:54.116 } 00:12:54.116 ] 00:12:54.116 }' 00:12:54.116 04:10:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:54.116 04:10:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:54.116 04:10:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq 
-r '.process.target // "none"' 00:12:54.376 04:10:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:54.376 04:10:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=370 00:12:54.376 04:10:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:54.376 04:10:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:54.376 04:10:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:54.376 04:10:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:54.376 04:10:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:54.376 04:10:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:54.376 04:10:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:54.376 04:10:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.376 04:10:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.376 04:10:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.376 04:10:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.376 04:10:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:54.376 "name": "raid_bdev1", 00:12:54.376 "uuid": "26557b76-b5d6-448f-a7b2-70dfcf29d149", 00:12:54.376 "strip_size_kb": 0, 00:12:54.376 "state": "online", 00:12:54.376 "raid_level": "raid1", 00:12:54.377 "superblock": false, 00:12:54.377 "num_base_bdevs": 4, 00:12:54.377 "num_base_bdevs_discovered": 3, 00:12:54.377 "num_base_bdevs_operational": 3, 00:12:54.377 "process": { 00:12:54.377 "type": "rebuild", 00:12:54.377 "target": "spare", 00:12:54.377 "progress": { 
00:12:54.377 "blocks": 26624, 00:12:54.377 "percent": 40 00:12:54.377 } 00:12:54.377 }, 00:12:54.377 "base_bdevs_list": [ 00:12:54.377 { 00:12:54.377 "name": "spare", 00:12:54.377 "uuid": "ab700a0b-abbd-59db-8e5f-5c39697755a1", 00:12:54.377 "is_configured": true, 00:12:54.377 "data_offset": 0, 00:12:54.377 "data_size": 65536 00:12:54.377 }, 00:12:54.377 { 00:12:54.377 "name": null, 00:12:54.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.377 "is_configured": false, 00:12:54.377 "data_offset": 0, 00:12:54.377 "data_size": 65536 00:12:54.377 }, 00:12:54.377 { 00:12:54.377 "name": "BaseBdev3", 00:12:54.377 "uuid": "3341d54d-69e4-54fd-b4ae-6645f1a21fe4", 00:12:54.377 "is_configured": true, 00:12:54.377 "data_offset": 0, 00:12:54.377 "data_size": 65536 00:12:54.377 }, 00:12:54.377 { 00:12:54.377 "name": "BaseBdev4", 00:12:54.377 "uuid": "1917f126-1667-556a-98ea-47187015fb11", 00:12:54.377 "is_configured": true, 00:12:54.377 "data_offset": 0, 00:12:54.377 "data_size": 65536 00:12:54.377 } 00:12:54.377 ] 00:12:54.377 }' 00:12:54.377 04:10:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:54.377 04:10:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:54.377 04:10:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:54.377 04:10:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:54.377 04:10:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:55.316 04:10:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:55.316 04:10:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:55.316 04:10:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:55.316 04:10:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:12:55.316 04:10:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:55.316 04:10:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:55.316 04:10:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.316 04:10:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:55.316 04:10:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.316 04:10:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.316 04:10:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.576 04:10:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:55.576 "name": "raid_bdev1", 00:12:55.576 "uuid": "26557b76-b5d6-448f-a7b2-70dfcf29d149", 00:12:55.576 "strip_size_kb": 0, 00:12:55.576 "state": "online", 00:12:55.576 "raid_level": "raid1", 00:12:55.576 "superblock": false, 00:12:55.576 "num_base_bdevs": 4, 00:12:55.576 "num_base_bdevs_discovered": 3, 00:12:55.576 "num_base_bdevs_operational": 3, 00:12:55.576 "process": { 00:12:55.576 "type": "rebuild", 00:12:55.576 "target": "spare", 00:12:55.576 "progress": { 00:12:55.576 "blocks": 49152, 00:12:55.576 "percent": 75 00:12:55.576 } 00:12:55.576 }, 00:12:55.576 "base_bdevs_list": [ 00:12:55.576 { 00:12:55.576 "name": "spare", 00:12:55.576 "uuid": "ab700a0b-abbd-59db-8e5f-5c39697755a1", 00:12:55.576 "is_configured": true, 00:12:55.576 "data_offset": 0, 00:12:55.576 "data_size": 65536 00:12:55.576 }, 00:12:55.576 { 00:12:55.576 "name": null, 00:12:55.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.576 "is_configured": false, 00:12:55.576 "data_offset": 0, 00:12:55.576 "data_size": 65536 00:12:55.576 }, 00:12:55.576 { 00:12:55.576 "name": "BaseBdev3", 00:12:55.576 "uuid": 
"3341d54d-69e4-54fd-b4ae-6645f1a21fe4", 00:12:55.576 "is_configured": true, 00:12:55.576 "data_offset": 0, 00:12:55.576 "data_size": 65536 00:12:55.576 }, 00:12:55.576 { 00:12:55.576 "name": "BaseBdev4", 00:12:55.576 "uuid": "1917f126-1667-556a-98ea-47187015fb11", 00:12:55.576 "is_configured": true, 00:12:55.576 "data_offset": 0, 00:12:55.576 "data_size": 65536 00:12:55.576 } 00:12:55.576 ] 00:12:55.576 }' 00:12:55.576 04:10:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:55.576 04:10:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:55.576 04:10:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:55.576 04:10:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:55.576 04:10:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:56.146 [2024-11-21 04:10:55.969370] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:56.146 [2024-11-21 04:10:55.969462] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:56.146 [2024-11-21 04:10:55.969523] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:56.715 04:10:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:56.715 04:10:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:56.715 04:10:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:56.715 04:10:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:56.715 04:10:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:56.715 04:10:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:56.715 04:10:56 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.715 04:10:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.715 04:10:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.715 04:10:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.715 04:10:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.715 04:10:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:56.715 "name": "raid_bdev1", 00:12:56.715 "uuid": "26557b76-b5d6-448f-a7b2-70dfcf29d149", 00:12:56.715 "strip_size_kb": 0, 00:12:56.715 "state": "online", 00:12:56.715 "raid_level": "raid1", 00:12:56.715 "superblock": false, 00:12:56.715 "num_base_bdevs": 4, 00:12:56.715 "num_base_bdevs_discovered": 3, 00:12:56.715 "num_base_bdevs_operational": 3, 00:12:56.715 "base_bdevs_list": [ 00:12:56.715 { 00:12:56.715 "name": "spare", 00:12:56.715 "uuid": "ab700a0b-abbd-59db-8e5f-5c39697755a1", 00:12:56.715 "is_configured": true, 00:12:56.715 "data_offset": 0, 00:12:56.715 "data_size": 65536 00:12:56.715 }, 00:12:56.715 { 00:12:56.715 "name": null, 00:12:56.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.715 "is_configured": false, 00:12:56.715 "data_offset": 0, 00:12:56.715 "data_size": 65536 00:12:56.715 }, 00:12:56.715 { 00:12:56.715 "name": "BaseBdev3", 00:12:56.715 "uuid": "3341d54d-69e4-54fd-b4ae-6645f1a21fe4", 00:12:56.715 "is_configured": true, 00:12:56.715 "data_offset": 0, 00:12:56.715 "data_size": 65536 00:12:56.715 }, 00:12:56.715 { 00:12:56.715 "name": "BaseBdev4", 00:12:56.715 "uuid": "1917f126-1667-556a-98ea-47187015fb11", 00:12:56.715 "is_configured": true, 00:12:56.715 "data_offset": 0, 00:12:56.715 "data_size": 65536 00:12:56.715 } 00:12:56.715 ] 00:12:56.715 }' 00:12:56.715 04:10:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # 
jq -r '.process.type // "none"' 00:12:56.715 04:10:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:56.715 04:10:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:56.715 04:10:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:56.715 04:10:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:12:56.715 04:10:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:56.715 04:10:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:56.715 04:10:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:56.715 04:10:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:56.715 04:10:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:56.715 04:10:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.715 04:10:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.715 04:10:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.715 04:10:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.715 04:10:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.715 04:10:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:56.715 "name": "raid_bdev1", 00:12:56.715 "uuid": "26557b76-b5d6-448f-a7b2-70dfcf29d149", 00:12:56.715 "strip_size_kb": 0, 00:12:56.715 "state": "online", 00:12:56.715 "raid_level": "raid1", 00:12:56.715 "superblock": false, 00:12:56.715 "num_base_bdevs": 4, 00:12:56.715 "num_base_bdevs_discovered": 3, 00:12:56.715 "num_base_bdevs_operational": 3, 00:12:56.715 
"base_bdevs_list": [ 00:12:56.715 { 00:12:56.715 "name": "spare", 00:12:56.715 "uuid": "ab700a0b-abbd-59db-8e5f-5c39697755a1", 00:12:56.715 "is_configured": true, 00:12:56.715 "data_offset": 0, 00:12:56.715 "data_size": 65536 00:12:56.715 }, 00:12:56.715 { 00:12:56.715 "name": null, 00:12:56.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.715 "is_configured": false, 00:12:56.715 "data_offset": 0, 00:12:56.715 "data_size": 65536 00:12:56.715 }, 00:12:56.715 { 00:12:56.715 "name": "BaseBdev3", 00:12:56.715 "uuid": "3341d54d-69e4-54fd-b4ae-6645f1a21fe4", 00:12:56.715 "is_configured": true, 00:12:56.715 "data_offset": 0, 00:12:56.715 "data_size": 65536 00:12:56.715 }, 00:12:56.715 { 00:12:56.715 "name": "BaseBdev4", 00:12:56.715 "uuid": "1917f126-1667-556a-98ea-47187015fb11", 00:12:56.715 "is_configured": true, 00:12:56.715 "data_offset": 0, 00:12:56.715 "data_size": 65536 00:12:56.715 } 00:12:56.715 ] 00:12:56.715 }' 00:12:56.716 04:10:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:56.716 04:10:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:56.716 04:10:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:56.716 04:10:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:56.716 04:10:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:56.716 04:10:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:56.716 04:10:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:56.716 04:10:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:56.716 04:10:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:56.716 04:10:56 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:56.716 04:10:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:56.716 04:10:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:56.716 04:10:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:56.716 04:10:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:56.716 04:10:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.716 04:10:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.716 04:10:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.716 04:10:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.975 04:10:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.975 04:10:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:56.975 "name": "raid_bdev1", 00:12:56.975 "uuid": "26557b76-b5d6-448f-a7b2-70dfcf29d149", 00:12:56.975 "strip_size_kb": 0, 00:12:56.975 "state": "online", 00:12:56.975 "raid_level": "raid1", 00:12:56.975 "superblock": false, 00:12:56.975 "num_base_bdevs": 4, 00:12:56.975 "num_base_bdevs_discovered": 3, 00:12:56.975 "num_base_bdevs_operational": 3, 00:12:56.975 "base_bdevs_list": [ 00:12:56.975 { 00:12:56.975 "name": "spare", 00:12:56.975 "uuid": "ab700a0b-abbd-59db-8e5f-5c39697755a1", 00:12:56.975 "is_configured": true, 00:12:56.975 "data_offset": 0, 00:12:56.975 "data_size": 65536 00:12:56.975 }, 00:12:56.975 { 00:12:56.975 "name": null, 00:12:56.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.975 "is_configured": false, 00:12:56.975 "data_offset": 0, 00:12:56.975 "data_size": 65536 00:12:56.975 }, 00:12:56.975 { 00:12:56.975 "name": "BaseBdev3", 00:12:56.975 "uuid": 
"3341d54d-69e4-54fd-b4ae-6645f1a21fe4", 00:12:56.975 "is_configured": true, 00:12:56.975 "data_offset": 0, 00:12:56.975 "data_size": 65536 00:12:56.975 }, 00:12:56.975 { 00:12:56.975 "name": "BaseBdev4", 00:12:56.975 "uuid": "1917f126-1667-556a-98ea-47187015fb11", 00:12:56.975 "is_configured": true, 00:12:56.975 "data_offset": 0, 00:12:56.975 "data_size": 65536 00:12:56.975 } 00:12:56.975 ] 00:12:56.975 }' 00:12:56.975 04:10:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:56.975 04:10:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.235 04:10:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:57.235 04:10:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.235 04:10:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.235 [2024-11-21 04:10:57.071235] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:57.235 [2024-11-21 04:10:57.071270] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:57.235 [2024-11-21 04:10:57.071390] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:57.235 [2024-11-21 04:10:57.071480] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:57.235 [2024-11-21 04:10:57.071500] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:12:57.235 04:10:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.235 04:10:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.235 04:10:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:12:57.235 04:10:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:57.235 04:10:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.235 04:10:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.235 04:10:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:57.235 04:10:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:57.235 04:10:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:57.235 04:10:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:57.235 04:10:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:57.235 04:10:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:57.235 04:10:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:57.235 04:10:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:57.235 04:10:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:57.235 04:10:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:57.235 04:10:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:57.235 04:10:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:57.235 04:10:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:57.495 /dev/nbd0 00:12:57.495 04:10:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:57.495 04:10:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:57.495 04:10:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:57.495 04:10:57 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:57.495 04:10:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:57.495 04:10:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:57.495 04:10:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:57.495 04:10:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:57.495 04:10:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:57.495 04:10:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:57.495 04:10:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:57.495 1+0 records in 00:12:57.495 1+0 records out 00:12:57.495 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000421336 s, 9.7 MB/s 00:12:57.495 04:10:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:57.495 04:10:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:57.495 04:10:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:57.495 04:10:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:57.495 04:10:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:57.495 04:10:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:57.495 04:10:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:57.495 04:10:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:57.766 /dev/nbd1 00:12:57.766 
04:10:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:57.766 04:10:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:57.766 04:10:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:57.766 04:10:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:57.766 04:10:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:57.766 04:10:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:57.766 04:10:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:57.766 04:10:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:57.766 04:10:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:57.766 04:10:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:57.766 04:10:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:57.766 1+0 records in 00:12:57.766 1+0 records out 00:12:57.766 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000409402 s, 10.0 MB/s 00:12:57.766 04:10:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:57.766 04:10:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:57.766 04:10:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:57.766 04:10:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:57.766 04:10:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:57.766 04:10:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:12:57.766 04:10:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:57.766 04:10:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:57.766 04:10:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:57.766 04:10:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:57.766 04:10:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:57.766 04:10:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:57.766 04:10:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:57.766 04:10:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:57.766 04:10:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:58.041 04:10:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:58.041 04:10:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:58.041 04:10:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:58.041 04:10:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:58.041 04:10:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:58.041 04:10:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:58.041 04:10:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:58.041 04:10:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:58.041 04:10:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:58.041 04:10:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:58.301 04:10:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:58.301 04:10:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:58.301 04:10:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:58.301 04:10:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:58.301 04:10:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:58.301 04:10:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:58.301 04:10:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:58.301 04:10:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:58.301 04:10:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:58.301 04:10:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 88231 00:12:58.301 04:10:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 88231 ']' 00:12:58.301 04:10:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 88231 00:12:58.301 04:10:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:12:58.301 04:10:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:58.301 04:10:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88231 00:12:58.301 killing process with pid 88231 00:12:58.301 Received shutdown signal, test time was about 60.000000 seconds 00:12:58.301 00:12:58.301 Latency(us) 00:12:58.301 [2024-11-21T04:10:58.274Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:58.301 [2024-11-21T04:10:58.274Z] 
=================================================================================================================== 00:12:58.301 [2024-11-21T04:10:58.274Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:58.301 04:10:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:58.301 04:10:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:58.301 04:10:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88231' 00:12:58.301 04:10:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 88231 00:12:58.301 [2024-11-21 04:10:58.177457] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:58.301 04:10:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 88231 00:12:58.302 [2024-11-21 04:10:58.268380] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:58.871 04:10:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:12:58.871 00:12:58.871 real 0m15.987s 00:12:58.871 user 0m17.480s 00:12:58.871 sys 0m3.285s 00:12:58.871 ************************************ 00:12:58.871 END TEST raid_rebuild_test 00:12:58.871 ************************************ 00:12:58.871 04:10:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:58.871 04:10:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.871 04:10:58 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:12:58.871 04:10:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:58.871 04:10:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:58.871 04:10:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:58.871 ************************************ 00:12:58.871 START TEST raid_rebuild_test_sb 00:12:58.871 
************************************ 00:12:58.871 04:10:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true 00:12:58.871 04:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:58.871 04:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:12:58.871 04:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:58.871 04:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:58.871 04:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:58.871 04:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:58.871 04:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:58.871 04:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:58.871 04:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:58.871 04:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:58.871 04:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:58.871 04:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:58.871 04:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:58.871 04:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:12:58.871 04:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:58.872 04:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:58.872 04:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:12:58.872 04:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 
00:12:58.872 04:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:58.872 04:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:58.872 04:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:58.872 04:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:58.872 04:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:58.872 04:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:58.872 04:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:58.872 04:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:58.872 04:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:58.872 04:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:58.872 04:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:58.872 04:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:58.872 04:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=88658 00:12:58.872 04:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:58.872 04:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 88658 00:12:58.872 04:10:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 88658 ']' 00:12:58.872 04:10:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:58.872 04:10:58 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:12:58.872 04:10:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:58.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:58.872 04:10:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:58.872 04:10:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.872 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:58.872 Zero copy mechanism will not be used. 00:12:58.872 [2024-11-21 04:10:58.758361] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:12:58.872 [2024-11-21 04:10:58.758559] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88658 ] 00:12:59.132 [2024-11-21 04:10:58.913643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:59.132 [2024-11-21 04:10:58.952070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:59.132 [2024-11-21 04:10:59.028440] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:59.132 [2024-11-21 04:10:59.028578] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:59.701 04:10:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:59.701 04:10:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:59.701 04:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:59.701 04:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 
00:12:59.701 04:10:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.701 04:10:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.701 BaseBdev1_malloc 00:12:59.701 04:10:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.701 04:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:59.701 04:10:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.701 04:10:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.702 [2024-11-21 04:10:59.610869] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:59.702 [2024-11-21 04:10:59.610940] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:59.702 [2024-11-21 04:10:59.610969] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:12:59.702 [2024-11-21 04:10:59.610981] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:59.702 [2024-11-21 04:10:59.613460] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:59.702 [2024-11-21 04:10:59.613535] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:59.702 BaseBdev1 00:12:59.702 04:10:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.702 04:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:59.702 04:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:59.702 04:10:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.702 04:10:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:12:59.702 BaseBdev2_malloc 00:12:59.702 04:10:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.702 04:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:59.702 04:10:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.702 04:10:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.702 [2024-11-21 04:10:59.645462] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:59.702 [2024-11-21 04:10:59.645514] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:59.702 [2024-11-21 04:10:59.645553] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:59.702 [2024-11-21 04:10:59.645562] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:59.702 [2024-11-21 04:10:59.648027] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:59.702 [2024-11-21 04:10:59.648070] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:59.702 BaseBdev2 00:12:59.702 04:10:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.702 04:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:59.702 04:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:59.702 04:10:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.702 04:10:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.962 BaseBdev3_malloc 00:12:59.962 04:10:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.962 04:10:59 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:12:59.962 04:10:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.962 04:10:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.962 [2024-11-21 04:10:59.680084] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:12:59.962 [2024-11-21 04:10:59.680186] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:59.962 [2024-11-21 04:10:59.680211] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:59.962 [2024-11-21 04:10:59.680220] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:59.962 [2024-11-21 04:10:59.682670] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:59.962 [2024-11-21 04:10:59.682764] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:59.962 BaseBdev3 00:12:59.962 04:10:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.962 04:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:59.962 04:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:59.962 04:10:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.963 04:10:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.963 BaseBdev4_malloc 00:12:59.963 04:10:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.963 04:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:12:59.963 04:10:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:59.963 04:10:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.963 [2024-11-21 04:10:59.729978] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:12:59.963 [2024-11-21 04:10:59.730152] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:59.963 [2024-11-21 04:10:59.730253] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:59.963 [2024-11-21 04:10:59.730277] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:59.963 [2024-11-21 04:10:59.734373] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:59.963 [2024-11-21 04:10:59.734434] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:59.963 BaseBdev4 00:12:59.963 04:10:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.963 04:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:59.963 04:10:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.963 04:10:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.963 spare_malloc 00:12:59.963 04:10:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.963 04:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:59.963 04:10:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.963 04:10:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.963 spare_delay 00:12:59.963 04:10:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.963 04:10:59 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:59.963 04:10:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.963 04:10:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.963 [2024-11-21 04:10:59.777254] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:59.963 [2024-11-21 04:10:59.777306] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:59.963 [2024-11-21 04:10:59.777335] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:12:59.963 [2024-11-21 04:10:59.777344] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:59.963 [2024-11-21 04:10:59.779741] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:59.963 [2024-11-21 04:10:59.779839] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:59.963 spare 00:12:59.963 04:10:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.963 04:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:12:59.963 04:10:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.963 04:10:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.963 [2024-11-21 04:10:59.789320] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:59.963 [2024-11-21 04:10:59.791393] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:59.963 [2024-11-21 04:10:59.791491] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:59.963 [2024-11-21 04:10:59.791601] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev4 is claimed 00:12:59.963 [2024-11-21 04:10:59.791830] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:12:59.963 [2024-11-21 04:10:59.791883] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:59.963 [2024-11-21 04:10:59.792172] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:12:59.963 [2024-11-21 04:10:59.792387] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:12:59.963 [2024-11-21 04:10:59.792436] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:12:59.963 [2024-11-21 04:10:59.792633] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:59.963 04:10:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.963 04:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:59.963 04:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:59.963 04:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:59.963 04:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:59.963 04:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:59.963 04:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:59.963 04:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.963 04:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.963 04:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.963 04:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:12:59.963 04:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.963 04:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.963 04:10:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.963 04:10:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.963 04:10:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.963 04:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.963 "name": "raid_bdev1", 00:12:59.963 "uuid": "16289533-7ae1-4ed7-b6b6-57e627dc4fd6", 00:12:59.963 "strip_size_kb": 0, 00:12:59.963 "state": "online", 00:12:59.963 "raid_level": "raid1", 00:12:59.963 "superblock": true, 00:12:59.963 "num_base_bdevs": 4, 00:12:59.963 "num_base_bdevs_discovered": 4, 00:12:59.963 "num_base_bdevs_operational": 4, 00:12:59.963 "base_bdevs_list": [ 00:12:59.963 { 00:12:59.963 "name": "BaseBdev1", 00:12:59.963 "uuid": "315ada00-38b6-5239-8b71-beaec86c852f", 00:12:59.963 "is_configured": true, 00:12:59.963 "data_offset": 2048, 00:12:59.963 "data_size": 63488 00:12:59.963 }, 00:12:59.963 { 00:12:59.963 "name": "BaseBdev2", 00:12:59.963 "uuid": "71ae4774-3bfc-554c-b99d-7142259ead3d", 00:12:59.963 "is_configured": true, 00:12:59.963 "data_offset": 2048, 00:12:59.963 "data_size": 63488 00:12:59.963 }, 00:12:59.963 { 00:12:59.963 "name": "BaseBdev3", 00:12:59.963 "uuid": "14e4700d-aea7-5558-b677-38534c13fa80", 00:12:59.963 "is_configured": true, 00:12:59.963 "data_offset": 2048, 00:12:59.963 "data_size": 63488 00:12:59.963 }, 00:12:59.963 { 00:12:59.963 "name": "BaseBdev4", 00:12:59.963 "uuid": "87d8d179-a96f-55b7-8c56-fd01f6245b96", 00:12:59.963 "is_configured": true, 00:12:59.963 "data_offset": 2048, 00:12:59.963 "data_size": 63488 00:12:59.963 } 00:12:59.963 ] 00:12:59.963 }' 
00:12:59.963 04:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.963 04:10:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.534 04:11:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:00.534 04:11:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:00.534 04:11:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.534 04:11:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.534 [2024-11-21 04:11:00.264819] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:00.534 04:11:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.534 04:11:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:00.534 04:11:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.534 04:11:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.534 04:11:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.534 04:11:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:00.534 04:11:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.534 04:11:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:00.534 04:11:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:00.534 04:11:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:00.534 04:11:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:00.534 04:11:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # 
nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:00.534 04:11:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:00.534 04:11:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:00.534 04:11:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:00.534 04:11:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:00.534 04:11:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:00.534 04:11:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:00.534 04:11:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:00.534 04:11:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:00.534 04:11:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:00.794 [2024-11-21 04:11:00.508159] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:13:00.794 /dev/nbd0 00:13:00.794 04:11:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:00.794 04:11:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:00.794 04:11:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:00.794 04:11:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:00.794 04:11:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:00.794 04:11:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:00.794 04:11:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:00.794 04:11:00 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@877 -- # break 00:13:00.794 04:11:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:00.794 04:11:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:00.794 04:11:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:00.794 1+0 records in 00:13:00.794 1+0 records out 00:13:00.794 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000489098 s, 8.4 MB/s 00:13:00.794 04:11:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:00.794 04:11:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:00.794 04:11:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:00.794 04:11:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:00.794 04:11:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:00.794 04:11:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:00.794 04:11:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:00.794 04:11:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:00.794 04:11:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:00.794 04:11:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:13:06.116 63488+0 records in 00:13:06.116 63488+0 records out 00:13:06.116 32505856 bytes (33 MB, 31 MiB) copied, 5.30189 s, 6.1 MB/s 00:13:06.116 04:11:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:06.116 04:11:05 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:06.116 04:11:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:06.116 04:11:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:06.116 04:11:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:06.116 04:11:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:06.116 04:11:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:06.376 [2024-11-21 04:11:06.089731] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:06.376 04:11:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:06.376 04:11:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:06.376 04:11:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:06.376 04:11:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:06.376 04:11:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:06.376 04:11:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:06.376 04:11:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:06.376 04:11:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:06.376 04:11:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:06.376 04:11:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.376 04:11:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.376 [2024-11-21 04:11:06.124130] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:06.376 04:11:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.376 04:11:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:06.376 04:11:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:06.376 04:11:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:06.376 04:11:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:06.376 04:11:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:06.376 04:11:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:06.376 04:11:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:06.376 04:11:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:06.376 04:11:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:06.376 04:11:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:06.376 04:11:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.376 04:11:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.376 04:11:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.376 04:11:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.376 04:11:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.376 04:11:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:06.376 "name": "raid_bdev1", 00:13:06.376 "uuid": 
"16289533-7ae1-4ed7-b6b6-57e627dc4fd6", 00:13:06.376 "strip_size_kb": 0, 00:13:06.376 "state": "online", 00:13:06.376 "raid_level": "raid1", 00:13:06.376 "superblock": true, 00:13:06.376 "num_base_bdevs": 4, 00:13:06.376 "num_base_bdevs_discovered": 3, 00:13:06.376 "num_base_bdevs_operational": 3, 00:13:06.376 "base_bdevs_list": [ 00:13:06.376 { 00:13:06.376 "name": null, 00:13:06.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.376 "is_configured": false, 00:13:06.376 "data_offset": 0, 00:13:06.376 "data_size": 63488 00:13:06.376 }, 00:13:06.376 { 00:13:06.376 "name": "BaseBdev2", 00:13:06.377 "uuid": "71ae4774-3bfc-554c-b99d-7142259ead3d", 00:13:06.377 "is_configured": true, 00:13:06.377 "data_offset": 2048, 00:13:06.377 "data_size": 63488 00:13:06.377 }, 00:13:06.377 { 00:13:06.377 "name": "BaseBdev3", 00:13:06.377 "uuid": "14e4700d-aea7-5558-b677-38534c13fa80", 00:13:06.377 "is_configured": true, 00:13:06.377 "data_offset": 2048, 00:13:06.377 "data_size": 63488 00:13:06.377 }, 00:13:06.377 { 00:13:06.377 "name": "BaseBdev4", 00:13:06.377 "uuid": "87d8d179-a96f-55b7-8c56-fd01f6245b96", 00:13:06.377 "is_configured": true, 00:13:06.377 "data_offset": 2048, 00:13:06.377 "data_size": 63488 00:13:06.377 } 00:13:06.377 ] 00:13:06.377 }' 00:13:06.377 04:11:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:06.377 04:11:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.636 04:11:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:06.636 04:11:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.636 04:11:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.636 [2024-11-21 04:11:06.607265] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:06.896 [2024-11-21 04:11:06.614588] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d000c3e420 00:13:06.896 04:11:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.896 04:11:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:06.896 [2024-11-21 04:11:06.616993] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:07.836 04:11:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:07.837 04:11:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:07.837 04:11:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:07.837 04:11:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:07.837 04:11:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:07.837 04:11:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.837 04:11:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:07.837 04:11:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.837 04:11:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.837 04:11:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.837 04:11:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:07.837 "name": "raid_bdev1", 00:13:07.837 "uuid": "16289533-7ae1-4ed7-b6b6-57e627dc4fd6", 00:13:07.837 "strip_size_kb": 0, 00:13:07.837 "state": "online", 00:13:07.837 "raid_level": "raid1", 00:13:07.837 "superblock": true, 00:13:07.837 "num_base_bdevs": 4, 00:13:07.837 "num_base_bdevs_discovered": 4, 00:13:07.837 "num_base_bdevs_operational": 4, 00:13:07.837 "process": { 00:13:07.837 "type": 
"rebuild", 00:13:07.837 "target": "spare", 00:13:07.837 "progress": { 00:13:07.837 "blocks": 20480, 00:13:07.837 "percent": 32 00:13:07.837 } 00:13:07.837 }, 00:13:07.837 "base_bdevs_list": [ 00:13:07.837 { 00:13:07.837 "name": "spare", 00:13:07.837 "uuid": "f7865591-2cd8-5f05-ae6d-cf520f412177", 00:13:07.837 "is_configured": true, 00:13:07.837 "data_offset": 2048, 00:13:07.837 "data_size": 63488 00:13:07.837 }, 00:13:07.837 { 00:13:07.837 "name": "BaseBdev2", 00:13:07.837 "uuid": "71ae4774-3bfc-554c-b99d-7142259ead3d", 00:13:07.837 "is_configured": true, 00:13:07.837 "data_offset": 2048, 00:13:07.837 "data_size": 63488 00:13:07.837 }, 00:13:07.837 { 00:13:07.837 "name": "BaseBdev3", 00:13:07.837 "uuid": "14e4700d-aea7-5558-b677-38534c13fa80", 00:13:07.837 "is_configured": true, 00:13:07.837 "data_offset": 2048, 00:13:07.837 "data_size": 63488 00:13:07.837 }, 00:13:07.837 { 00:13:07.837 "name": "BaseBdev4", 00:13:07.837 "uuid": "87d8d179-a96f-55b7-8c56-fd01f6245b96", 00:13:07.837 "is_configured": true, 00:13:07.837 "data_offset": 2048, 00:13:07.837 "data_size": 63488 00:13:07.837 } 00:13:07.837 ] 00:13:07.837 }' 00:13:07.837 04:11:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:07.837 04:11:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:07.837 04:11:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:07.837 04:11:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:07.837 04:11:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:07.837 04:11:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.837 04:11:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.837 [2024-11-21 04:11:07.772672] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:08.097 [2024-11-21 04:11:07.825409] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:08.097 [2024-11-21 04:11:07.825547] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:08.097 [2024-11-21 04:11:07.825620] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:08.097 [2024-11-21 04:11:07.825643] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:08.097 04:11:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.097 04:11:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:08.097 04:11:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:08.097 04:11:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:08.097 04:11:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:08.097 04:11:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:08.097 04:11:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:08.097 04:11:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:08.097 04:11:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:08.097 04:11:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:08.097 04:11:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:08.097 04:11:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.097 04:11:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:13:08.097 04:11:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.097 04:11:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.097 04:11:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.097 04:11:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:08.097 "name": "raid_bdev1", 00:13:08.097 "uuid": "16289533-7ae1-4ed7-b6b6-57e627dc4fd6", 00:13:08.097 "strip_size_kb": 0, 00:13:08.097 "state": "online", 00:13:08.097 "raid_level": "raid1", 00:13:08.097 "superblock": true, 00:13:08.097 "num_base_bdevs": 4, 00:13:08.097 "num_base_bdevs_discovered": 3, 00:13:08.097 "num_base_bdevs_operational": 3, 00:13:08.097 "base_bdevs_list": [ 00:13:08.097 { 00:13:08.097 "name": null, 00:13:08.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:08.097 "is_configured": false, 00:13:08.097 "data_offset": 0, 00:13:08.097 "data_size": 63488 00:13:08.097 }, 00:13:08.097 { 00:13:08.097 "name": "BaseBdev2", 00:13:08.097 "uuid": "71ae4774-3bfc-554c-b99d-7142259ead3d", 00:13:08.097 "is_configured": true, 00:13:08.097 "data_offset": 2048, 00:13:08.098 "data_size": 63488 00:13:08.098 }, 00:13:08.098 { 00:13:08.098 "name": "BaseBdev3", 00:13:08.098 "uuid": "14e4700d-aea7-5558-b677-38534c13fa80", 00:13:08.098 "is_configured": true, 00:13:08.098 "data_offset": 2048, 00:13:08.098 "data_size": 63488 00:13:08.098 }, 00:13:08.098 { 00:13:08.098 "name": "BaseBdev4", 00:13:08.098 "uuid": "87d8d179-a96f-55b7-8c56-fd01f6245b96", 00:13:08.098 "is_configured": true, 00:13:08.098 "data_offset": 2048, 00:13:08.098 "data_size": 63488 00:13:08.098 } 00:13:08.098 ] 00:13:08.098 }' 00:13:08.098 04:11:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:08.098 04:11:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.357 04:11:08 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:08.357 04:11:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:08.357 04:11:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:08.357 04:11:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:08.357 04:11:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:08.357 04:11:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.357 04:11:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.357 04:11:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.357 04:11:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.357 04:11:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.357 04:11:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:08.357 "name": "raid_bdev1", 00:13:08.357 "uuid": "16289533-7ae1-4ed7-b6b6-57e627dc4fd6", 00:13:08.357 "strip_size_kb": 0, 00:13:08.357 "state": "online", 00:13:08.357 "raid_level": "raid1", 00:13:08.357 "superblock": true, 00:13:08.357 "num_base_bdevs": 4, 00:13:08.357 "num_base_bdevs_discovered": 3, 00:13:08.357 "num_base_bdevs_operational": 3, 00:13:08.357 "base_bdevs_list": [ 00:13:08.357 { 00:13:08.357 "name": null, 00:13:08.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:08.357 "is_configured": false, 00:13:08.357 "data_offset": 0, 00:13:08.357 "data_size": 63488 00:13:08.357 }, 00:13:08.357 { 00:13:08.357 "name": "BaseBdev2", 00:13:08.357 "uuid": "71ae4774-3bfc-554c-b99d-7142259ead3d", 00:13:08.357 "is_configured": true, 00:13:08.357 "data_offset": 2048, 00:13:08.357 "data_size": 
63488 00:13:08.357 }, 00:13:08.357 { 00:13:08.357 "name": "BaseBdev3", 00:13:08.357 "uuid": "14e4700d-aea7-5558-b677-38534c13fa80", 00:13:08.357 "is_configured": true, 00:13:08.357 "data_offset": 2048, 00:13:08.357 "data_size": 63488 00:13:08.357 }, 00:13:08.357 { 00:13:08.357 "name": "BaseBdev4", 00:13:08.357 "uuid": "87d8d179-a96f-55b7-8c56-fd01f6245b96", 00:13:08.357 "is_configured": true, 00:13:08.357 "data_offset": 2048, 00:13:08.357 "data_size": 63488 00:13:08.357 } 00:13:08.357 ] 00:13:08.357 }' 00:13:08.617 04:11:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:08.617 04:11:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:08.617 04:11:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:08.617 04:11:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:08.617 04:11:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:08.617 04:11:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.617 04:11:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.617 [2024-11-21 04:11:08.428649] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:08.617 [2024-11-21 04:11:08.435442] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000c3e4f0 00:13:08.617 04:11:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.617 04:11:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:08.617 [2024-11-21 04:11:08.437735] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:09.557 04:11:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild 
spare 00:13:09.557 04:11:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:09.557 04:11:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:09.557 04:11:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:09.557 04:11:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:09.557 04:11:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.557 04:11:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.557 04:11:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.557 04:11:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.557 04:11:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.557 04:11:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:09.557 "name": "raid_bdev1", 00:13:09.557 "uuid": "16289533-7ae1-4ed7-b6b6-57e627dc4fd6", 00:13:09.557 "strip_size_kb": 0, 00:13:09.557 "state": "online", 00:13:09.557 "raid_level": "raid1", 00:13:09.557 "superblock": true, 00:13:09.557 "num_base_bdevs": 4, 00:13:09.557 "num_base_bdevs_discovered": 4, 00:13:09.557 "num_base_bdevs_operational": 4, 00:13:09.557 "process": { 00:13:09.557 "type": "rebuild", 00:13:09.557 "target": "spare", 00:13:09.557 "progress": { 00:13:09.557 "blocks": 20480, 00:13:09.557 "percent": 32 00:13:09.557 } 00:13:09.557 }, 00:13:09.557 "base_bdevs_list": [ 00:13:09.557 { 00:13:09.557 "name": "spare", 00:13:09.557 "uuid": "f7865591-2cd8-5f05-ae6d-cf520f412177", 00:13:09.557 "is_configured": true, 00:13:09.557 "data_offset": 2048, 00:13:09.557 "data_size": 63488 00:13:09.557 }, 00:13:09.557 { 00:13:09.557 "name": "BaseBdev2", 00:13:09.557 "uuid": 
"71ae4774-3bfc-554c-b99d-7142259ead3d", 00:13:09.557 "is_configured": true, 00:13:09.557 "data_offset": 2048, 00:13:09.557 "data_size": 63488 00:13:09.557 }, 00:13:09.557 { 00:13:09.557 "name": "BaseBdev3", 00:13:09.557 "uuid": "14e4700d-aea7-5558-b677-38534c13fa80", 00:13:09.557 "is_configured": true, 00:13:09.557 "data_offset": 2048, 00:13:09.557 "data_size": 63488 00:13:09.557 }, 00:13:09.557 { 00:13:09.557 "name": "BaseBdev4", 00:13:09.557 "uuid": "87d8d179-a96f-55b7-8c56-fd01f6245b96", 00:13:09.557 "is_configured": true, 00:13:09.557 "data_offset": 2048, 00:13:09.557 "data_size": 63488 00:13:09.557 } 00:13:09.557 ] 00:13:09.557 }' 00:13:09.557 04:11:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:09.817 04:11:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:09.817 04:11:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:09.817 04:11:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:09.817 04:11:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:09.817 04:11:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:09.817 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:09.817 04:11:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:09.817 04:11:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:09.817 04:11:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:09.817 04:11:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:09.817 04:11:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.817 04:11:09 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.817 [2024-11-21 04:11:09.581786] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:09.817 [2024-11-21 04:11:09.745309] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000c3e4f0 00:13:09.817 04:11:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.817 04:11:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:09.817 04:11:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:09.817 04:11:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:09.817 04:11:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:09.817 04:11:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:09.817 04:11:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:09.817 04:11:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:09.817 04:11:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.817 04:11:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.817 04:11:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.817 04:11:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.817 04:11:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.817 04:11:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:09.817 "name": "raid_bdev1", 00:13:09.817 "uuid": "16289533-7ae1-4ed7-b6b6-57e627dc4fd6", 00:13:09.817 "strip_size_kb": 0, 00:13:09.817 
"state": "online", 00:13:09.817 "raid_level": "raid1", 00:13:09.817 "superblock": true, 00:13:09.817 "num_base_bdevs": 4, 00:13:09.817 "num_base_bdevs_discovered": 3, 00:13:09.817 "num_base_bdevs_operational": 3, 00:13:09.817 "process": { 00:13:09.817 "type": "rebuild", 00:13:09.817 "target": "spare", 00:13:09.817 "progress": { 00:13:09.817 "blocks": 24576, 00:13:09.817 "percent": 38 00:13:09.817 } 00:13:09.817 }, 00:13:09.817 "base_bdevs_list": [ 00:13:09.817 { 00:13:09.817 "name": "spare", 00:13:09.817 "uuid": "f7865591-2cd8-5f05-ae6d-cf520f412177", 00:13:09.817 "is_configured": true, 00:13:09.817 "data_offset": 2048, 00:13:09.817 "data_size": 63488 00:13:09.817 }, 00:13:09.817 { 00:13:09.817 "name": null, 00:13:09.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.817 "is_configured": false, 00:13:09.817 "data_offset": 0, 00:13:09.817 "data_size": 63488 00:13:09.817 }, 00:13:09.817 { 00:13:09.817 "name": "BaseBdev3", 00:13:09.817 "uuid": "14e4700d-aea7-5558-b677-38534c13fa80", 00:13:09.817 "is_configured": true, 00:13:09.817 "data_offset": 2048, 00:13:09.817 "data_size": 63488 00:13:09.817 }, 00:13:09.817 { 00:13:09.817 "name": "BaseBdev4", 00:13:09.817 "uuid": "87d8d179-a96f-55b7-8c56-fd01f6245b96", 00:13:09.817 "is_configured": true, 00:13:09.817 "data_offset": 2048, 00:13:09.817 "data_size": 63488 00:13:09.817 } 00:13:09.817 ] 00:13:09.817 }' 00:13:09.817 04:11:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:10.078 04:11:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:10.078 04:11:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:10.078 04:11:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:10.078 04:11:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=385 00:13:10.078 04:11:09 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:10.078 04:11:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:10.078 04:11:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:10.078 04:11:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:10.078 04:11:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:10.078 04:11:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:10.078 04:11:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.078 04:11:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.078 04:11:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:10.078 04:11:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.078 04:11:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.078 04:11:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:10.078 "name": "raid_bdev1", 00:13:10.078 "uuid": "16289533-7ae1-4ed7-b6b6-57e627dc4fd6", 00:13:10.078 "strip_size_kb": 0, 00:13:10.078 "state": "online", 00:13:10.078 "raid_level": "raid1", 00:13:10.078 "superblock": true, 00:13:10.078 "num_base_bdevs": 4, 00:13:10.078 "num_base_bdevs_discovered": 3, 00:13:10.078 "num_base_bdevs_operational": 3, 00:13:10.078 "process": { 00:13:10.078 "type": "rebuild", 00:13:10.078 "target": "spare", 00:13:10.078 "progress": { 00:13:10.078 "blocks": 26624, 00:13:10.078 "percent": 41 00:13:10.078 } 00:13:10.078 }, 00:13:10.078 "base_bdevs_list": [ 00:13:10.078 { 00:13:10.078 "name": "spare", 00:13:10.078 "uuid": "f7865591-2cd8-5f05-ae6d-cf520f412177", 00:13:10.078 "is_configured": 
true, 00:13:10.078 "data_offset": 2048, 00:13:10.078 "data_size": 63488 00:13:10.078 }, 00:13:10.078 { 00:13:10.078 "name": null, 00:13:10.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.078 "is_configured": false, 00:13:10.078 "data_offset": 0, 00:13:10.078 "data_size": 63488 00:13:10.078 }, 00:13:10.078 { 00:13:10.078 "name": "BaseBdev3", 00:13:10.078 "uuid": "14e4700d-aea7-5558-b677-38534c13fa80", 00:13:10.078 "is_configured": true, 00:13:10.078 "data_offset": 2048, 00:13:10.078 "data_size": 63488 00:13:10.078 }, 00:13:10.078 { 00:13:10.078 "name": "BaseBdev4", 00:13:10.078 "uuid": "87d8d179-a96f-55b7-8c56-fd01f6245b96", 00:13:10.078 "is_configured": true, 00:13:10.078 "data_offset": 2048, 00:13:10.078 "data_size": 63488 00:13:10.078 } 00:13:10.078 ] 00:13:10.078 }' 00:13:10.078 04:11:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:10.078 04:11:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:10.078 04:11:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:10.078 04:11:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:10.078 04:11:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:11.460 04:11:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:11.460 04:11:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:11.460 04:11:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:11.460 04:11:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:11.460 04:11:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:11.460 04:11:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:13:11.460 04:11:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.460 04:11:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.460 04:11:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.460 04:11:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.460 04:11:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.460 04:11:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:11.460 "name": "raid_bdev1", 00:13:11.460 "uuid": "16289533-7ae1-4ed7-b6b6-57e627dc4fd6", 00:13:11.460 "strip_size_kb": 0, 00:13:11.460 "state": "online", 00:13:11.460 "raid_level": "raid1", 00:13:11.460 "superblock": true, 00:13:11.460 "num_base_bdevs": 4, 00:13:11.460 "num_base_bdevs_discovered": 3, 00:13:11.460 "num_base_bdevs_operational": 3, 00:13:11.460 "process": { 00:13:11.460 "type": "rebuild", 00:13:11.460 "target": "spare", 00:13:11.460 "progress": { 00:13:11.460 "blocks": 49152, 00:13:11.460 "percent": 77 00:13:11.460 } 00:13:11.460 }, 00:13:11.460 "base_bdevs_list": [ 00:13:11.460 { 00:13:11.460 "name": "spare", 00:13:11.460 "uuid": "f7865591-2cd8-5f05-ae6d-cf520f412177", 00:13:11.460 "is_configured": true, 00:13:11.460 "data_offset": 2048, 00:13:11.460 "data_size": 63488 00:13:11.460 }, 00:13:11.460 { 00:13:11.460 "name": null, 00:13:11.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.460 "is_configured": false, 00:13:11.460 "data_offset": 0, 00:13:11.460 "data_size": 63488 00:13:11.460 }, 00:13:11.460 { 00:13:11.460 "name": "BaseBdev3", 00:13:11.460 "uuid": "14e4700d-aea7-5558-b677-38534c13fa80", 00:13:11.460 "is_configured": true, 00:13:11.460 "data_offset": 2048, 00:13:11.460 "data_size": 63488 00:13:11.460 }, 00:13:11.460 { 00:13:11.460 "name": "BaseBdev4", 00:13:11.460 "uuid": 
"87d8d179-a96f-55b7-8c56-fd01f6245b96", 00:13:11.460 "is_configured": true, 00:13:11.460 "data_offset": 2048, 00:13:11.460 "data_size": 63488 00:13:11.460 } 00:13:11.460 ] 00:13:11.460 }' 00:13:11.460 04:11:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:11.460 04:11:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:11.460 04:11:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:11.460 04:11:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:11.460 04:11:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:11.720 [2024-11-21 04:11:11.657843] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:11.720 [2024-11-21 04:11:11.657983] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:11.720 [2024-11-21 04:11:11.658170] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:12.289 04:11:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:12.289 04:11:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:12.289 04:11:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:12.289 04:11:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:12.289 04:11:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:12.289 04:11:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:12.290 04:11:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.290 04:11:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:12.290 04:11:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.290 04:11:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.290 04:11:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.290 04:11:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:12.290 "name": "raid_bdev1", 00:13:12.290 "uuid": "16289533-7ae1-4ed7-b6b6-57e627dc4fd6", 00:13:12.290 "strip_size_kb": 0, 00:13:12.290 "state": "online", 00:13:12.290 "raid_level": "raid1", 00:13:12.290 "superblock": true, 00:13:12.290 "num_base_bdevs": 4, 00:13:12.290 "num_base_bdevs_discovered": 3, 00:13:12.290 "num_base_bdevs_operational": 3, 00:13:12.290 "base_bdevs_list": [ 00:13:12.290 { 00:13:12.290 "name": "spare", 00:13:12.290 "uuid": "f7865591-2cd8-5f05-ae6d-cf520f412177", 00:13:12.290 "is_configured": true, 00:13:12.290 "data_offset": 2048, 00:13:12.290 "data_size": 63488 00:13:12.290 }, 00:13:12.290 { 00:13:12.290 "name": null, 00:13:12.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.290 "is_configured": false, 00:13:12.290 "data_offset": 0, 00:13:12.290 "data_size": 63488 00:13:12.290 }, 00:13:12.290 { 00:13:12.290 "name": "BaseBdev3", 00:13:12.290 "uuid": "14e4700d-aea7-5558-b677-38534c13fa80", 00:13:12.290 "is_configured": true, 00:13:12.290 "data_offset": 2048, 00:13:12.290 "data_size": 63488 00:13:12.290 }, 00:13:12.290 { 00:13:12.290 "name": "BaseBdev4", 00:13:12.290 "uuid": "87d8d179-a96f-55b7-8c56-fd01f6245b96", 00:13:12.290 "is_configured": true, 00:13:12.290 "data_offset": 2048, 00:13:12.290 "data_size": 63488 00:13:12.290 } 00:13:12.290 ] 00:13:12.290 }' 00:13:12.290 04:11:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:12.549 04:11:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 
00:13:12.549 04:11:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:12.549 04:11:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:12.549 04:11:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:13:12.549 04:11:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:12.549 04:11:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:12.549 04:11:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:12.549 04:11:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:12.549 04:11:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:12.549 04:11:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.549 04:11:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.549 04:11:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.549 04:11:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.549 04:11:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.549 04:11:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:12.549 "name": "raid_bdev1", 00:13:12.549 "uuid": "16289533-7ae1-4ed7-b6b6-57e627dc4fd6", 00:13:12.549 "strip_size_kb": 0, 00:13:12.549 "state": "online", 00:13:12.549 "raid_level": "raid1", 00:13:12.549 "superblock": true, 00:13:12.549 "num_base_bdevs": 4, 00:13:12.549 "num_base_bdevs_discovered": 3, 00:13:12.549 "num_base_bdevs_operational": 3, 00:13:12.549 "base_bdevs_list": [ 00:13:12.549 { 00:13:12.549 "name": "spare", 00:13:12.549 "uuid": 
"f7865591-2cd8-5f05-ae6d-cf520f412177", 00:13:12.549 "is_configured": true, 00:13:12.549 "data_offset": 2048, 00:13:12.549 "data_size": 63488 00:13:12.549 }, 00:13:12.549 { 00:13:12.549 "name": null, 00:13:12.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.549 "is_configured": false, 00:13:12.549 "data_offset": 0, 00:13:12.549 "data_size": 63488 00:13:12.549 }, 00:13:12.549 { 00:13:12.549 "name": "BaseBdev3", 00:13:12.549 "uuid": "14e4700d-aea7-5558-b677-38534c13fa80", 00:13:12.549 "is_configured": true, 00:13:12.549 "data_offset": 2048, 00:13:12.549 "data_size": 63488 00:13:12.549 }, 00:13:12.549 { 00:13:12.549 "name": "BaseBdev4", 00:13:12.549 "uuid": "87d8d179-a96f-55b7-8c56-fd01f6245b96", 00:13:12.549 "is_configured": true, 00:13:12.549 "data_offset": 2048, 00:13:12.549 "data_size": 63488 00:13:12.549 } 00:13:12.549 ] 00:13:12.549 }' 00:13:12.549 04:11:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:12.549 04:11:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:12.549 04:11:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:12.549 04:11:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:12.549 04:11:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:12.549 04:11:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:12.549 04:11:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:12.549 04:11:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:12.549 04:11:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:12.549 04:11:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:13:12.549 04:11:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:12.549 04:11:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:12.549 04:11:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:12.549 04:11:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:12.549 04:11:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.549 04:11:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.549 04:11:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.549 04:11:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.549 04:11:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.549 04:11:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:12.549 "name": "raid_bdev1", 00:13:12.549 "uuid": "16289533-7ae1-4ed7-b6b6-57e627dc4fd6", 00:13:12.549 "strip_size_kb": 0, 00:13:12.549 "state": "online", 00:13:12.549 "raid_level": "raid1", 00:13:12.549 "superblock": true, 00:13:12.549 "num_base_bdevs": 4, 00:13:12.549 "num_base_bdevs_discovered": 3, 00:13:12.549 "num_base_bdevs_operational": 3, 00:13:12.549 "base_bdevs_list": [ 00:13:12.549 { 00:13:12.549 "name": "spare", 00:13:12.549 "uuid": "f7865591-2cd8-5f05-ae6d-cf520f412177", 00:13:12.549 "is_configured": true, 00:13:12.549 "data_offset": 2048, 00:13:12.549 "data_size": 63488 00:13:12.549 }, 00:13:12.549 { 00:13:12.549 "name": null, 00:13:12.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.549 "is_configured": false, 00:13:12.549 "data_offset": 0, 00:13:12.549 "data_size": 63488 00:13:12.549 }, 00:13:12.549 { 00:13:12.549 "name": "BaseBdev3", 00:13:12.549 "uuid": 
"14e4700d-aea7-5558-b677-38534c13fa80", 00:13:12.549 "is_configured": true, 00:13:12.549 "data_offset": 2048, 00:13:12.549 "data_size": 63488 00:13:12.549 }, 00:13:12.549 { 00:13:12.549 "name": "BaseBdev4", 00:13:12.550 "uuid": "87d8d179-a96f-55b7-8c56-fd01f6245b96", 00:13:12.550 "is_configured": true, 00:13:12.550 "data_offset": 2048, 00:13:12.550 "data_size": 63488 00:13:12.550 } 00:13:12.550 ] 00:13:12.550 }' 00:13:12.550 04:11:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:12.550 04:11:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.117 04:11:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:13.117 04:11:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.117 04:11:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.117 [2024-11-21 04:11:12.903504] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:13.117 [2024-11-21 04:11:12.903586] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:13.117 [2024-11-21 04:11:12.903739] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:13.117 [2024-11-21 04:11:12.903885] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:13.117 [2024-11-21 04:11:12.903936] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:13:13.117 04:11:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.117 04:11:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.117 04:11:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:13:13.117 04:11:12 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.117 04:11:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.117 04:11:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.117 04:11:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:13.117 04:11:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:13.117 04:11:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:13.117 04:11:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:13.117 04:11:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:13.117 04:11:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:13.117 04:11:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:13.117 04:11:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:13.117 04:11:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:13.117 04:11:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:13.117 04:11:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:13.117 04:11:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:13.117 04:11:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:13.377 /dev/nbd0 00:13:13.377 04:11:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:13.377 04:11:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:13.377 04:11:13 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:13.377 04:11:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:13.377 04:11:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:13.377 04:11:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:13.377 04:11:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:13.377 04:11:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:13.377 04:11:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:13.377 04:11:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:13.377 04:11:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:13.377 1+0 records in 00:13:13.377 1+0 records out 00:13:13.377 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000418249 s, 9.8 MB/s 00:13:13.377 04:11:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:13.377 04:11:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:13.377 04:11:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:13.377 04:11:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:13.377 04:11:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:13.377 04:11:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:13.377 04:11:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:13.377 04:11:13 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:13.637 /dev/nbd1 00:13:13.637 04:11:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:13.637 04:11:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:13.637 04:11:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:13.637 04:11:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:13.637 04:11:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:13.637 04:11:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:13.637 04:11:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:13.637 04:11:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:13.637 04:11:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:13.637 04:11:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:13.637 04:11:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:13.637 1+0 records in 00:13:13.637 1+0 records out 00:13:13.637 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000268208 s, 15.3 MB/s 00:13:13.637 04:11:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:13.637 04:11:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:13.637 04:11:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:13.638 04:11:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # 
'[' 4096 '!=' 0 ']' 00:13:13.638 04:11:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:13.638 04:11:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:13.638 04:11:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:13.638 04:11:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:13.638 04:11:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:13.638 04:11:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:13.638 04:11:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:13.638 04:11:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:13.638 04:11:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:13.638 04:11:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:13.638 04:11:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:13.897 04:11:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:13.897 04:11:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:13.897 04:11:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:13.897 04:11:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:13.897 04:11:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:13.897 04:11:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:13.897 04:11:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:13.897 
04:11:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:13.897 04:11:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:13.897 04:11:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:14.157 04:11:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:14.157 04:11:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:14.157 04:11:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:14.157 04:11:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:14.157 04:11:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:14.157 04:11:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:14.157 04:11:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:14.157 04:11:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:14.157 04:11:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:14.157 04:11:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:14.157 04:11:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.157 04:11:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.157 04:11:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.157 04:11:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:14.157 04:11:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.157 04:11:13 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:14.157 [2024-11-21 04:11:13.920627] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:14.157 [2024-11-21 04:11:13.920699] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:14.157 [2024-11-21 04:11:13.920722] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:13:14.157 [2024-11-21 04:11:13.920736] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:14.157 [2024-11-21 04:11:13.923253] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:14.157 [2024-11-21 04:11:13.923291] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:14.157 [2024-11-21 04:11:13.923369] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:14.157 [2024-11-21 04:11:13.923409] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:14.157 [2024-11-21 04:11:13.923519] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:14.157 [2024-11-21 04:11:13.923607] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:14.157 spare 00:13:14.157 04:11:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.157 04:11:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:14.158 04:11:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.158 04:11:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.158 [2024-11-21 04:11:14.023485] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:13:14.158 [2024-11-21 04:11:14.023561] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:14.158 [2024-11-21 
04:11:14.023926] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caeb00 00:13:14.158 [2024-11-21 04:11:14.024106] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:13:14.158 [2024-11-21 04:11:14.024117] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:13:14.158 [2024-11-21 04:11:14.024272] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:14.158 04:11:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.158 04:11:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:14.158 04:11:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:14.158 04:11:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:14.158 04:11:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:14.158 04:11:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:14.158 04:11:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:14.158 04:11:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:14.158 04:11:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:14.158 04:11:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:14.158 04:11:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:14.158 04:11:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:14.158 04:11:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.158 04:11:14 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.158 04:11:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.158 04:11:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.158 04:11:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:14.158 "name": "raid_bdev1", 00:13:14.158 "uuid": "16289533-7ae1-4ed7-b6b6-57e627dc4fd6", 00:13:14.158 "strip_size_kb": 0, 00:13:14.158 "state": "online", 00:13:14.158 "raid_level": "raid1", 00:13:14.158 "superblock": true, 00:13:14.158 "num_base_bdevs": 4, 00:13:14.158 "num_base_bdevs_discovered": 3, 00:13:14.158 "num_base_bdevs_operational": 3, 00:13:14.158 "base_bdevs_list": [ 00:13:14.158 { 00:13:14.158 "name": "spare", 00:13:14.158 "uuid": "f7865591-2cd8-5f05-ae6d-cf520f412177", 00:13:14.158 "is_configured": true, 00:13:14.158 "data_offset": 2048, 00:13:14.158 "data_size": 63488 00:13:14.158 }, 00:13:14.158 { 00:13:14.158 "name": null, 00:13:14.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.158 "is_configured": false, 00:13:14.158 "data_offset": 2048, 00:13:14.158 "data_size": 63488 00:13:14.158 }, 00:13:14.158 { 00:13:14.158 "name": "BaseBdev3", 00:13:14.158 "uuid": "14e4700d-aea7-5558-b677-38534c13fa80", 00:13:14.158 "is_configured": true, 00:13:14.158 "data_offset": 2048, 00:13:14.158 "data_size": 63488 00:13:14.158 }, 00:13:14.158 { 00:13:14.158 "name": "BaseBdev4", 00:13:14.158 "uuid": "87d8d179-a96f-55b7-8c56-fd01f6245b96", 00:13:14.158 "is_configured": true, 00:13:14.158 "data_offset": 2048, 00:13:14.158 "data_size": 63488 00:13:14.158 } 00:13:14.158 ] 00:13:14.158 }' 00:13:14.158 04:11:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:14.158 04:11:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.726 04:11:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:13:14.726 04:11:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:14.726 04:11:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:14.726 04:11:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:14.726 04:11:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:14.727 04:11:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.727 04:11:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:14.727 04:11:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.727 04:11:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.727 04:11:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.727 04:11:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:14.727 "name": "raid_bdev1", 00:13:14.727 "uuid": "16289533-7ae1-4ed7-b6b6-57e627dc4fd6", 00:13:14.727 "strip_size_kb": 0, 00:13:14.727 "state": "online", 00:13:14.727 "raid_level": "raid1", 00:13:14.727 "superblock": true, 00:13:14.727 "num_base_bdevs": 4, 00:13:14.727 "num_base_bdevs_discovered": 3, 00:13:14.727 "num_base_bdevs_operational": 3, 00:13:14.727 "base_bdevs_list": [ 00:13:14.727 { 00:13:14.727 "name": "spare", 00:13:14.727 "uuid": "f7865591-2cd8-5f05-ae6d-cf520f412177", 00:13:14.727 "is_configured": true, 00:13:14.727 "data_offset": 2048, 00:13:14.727 "data_size": 63488 00:13:14.727 }, 00:13:14.727 { 00:13:14.727 "name": null, 00:13:14.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.727 "is_configured": false, 00:13:14.727 "data_offset": 2048, 00:13:14.727 "data_size": 63488 00:13:14.727 }, 00:13:14.727 { 00:13:14.727 "name": 
"BaseBdev3", 00:13:14.727 "uuid": "14e4700d-aea7-5558-b677-38534c13fa80", 00:13:14.727 "is_configured": true, 00:13:14.727 "data_offset": 2048, 00:13:14.727 "data_size": 63488 00:13:14.727 }, 00:13:14.727 { 00:13:14.727 "name": "BaseBdev4", 00:13:14.727 "uuid": "87d8d179-a96f-55b7-8c56-fd01f6245b96", 00:13:14.727 "is_configured": true, 00:13:14.727 "data_offset": 2048, 00:13:14.727 "data_size": 63488 00:13:14.727 } 00:13:14.727 ] 00:13:14.727 }' 00:13:14.727 04:11:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:14.727 04:11:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:14.727 04:11:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:14.727 04:11:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:14.727 04:11:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:14.727 04:11:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.727 04:11:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.727 04:11:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.727 04:11:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.727 04:11:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:14.727 04:11:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:14.727 04:11:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.727 04:11:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.727 [2024-11-21 04:11:14.663452] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:14.727 04:11:14 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.727 04:11:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:14.727 04:11:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:14.727 04:11:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:14.727 04:11:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:14.727 04:11:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:14.727 04:11:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:14.727 04:11:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:14.727 04:11:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:14.727 04:11:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:14.727 04:11:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:14.727 04:11:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.727 04:11:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.727 04:11:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.727 04:11:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:14.727 04:11:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.986 04:11:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:14.986 "name": "raid_bdev1", 00:13:14.986 "uuid": "16289533-7ae1-4ed7-b6b6-57e627dc4fd6", 00:13:14.986 "strip_size_kb": 0, 00:13:14.986 "state": "online", 
00:13:14.986 "raid_level": "raid1", 00:13:14.986 "superblock": true, 00:13:14.986 "num_base_bdevs": 4, 00:13:14.986 "num_base_bdevs_discovered": 2, 00:13:14.986 "num_base_bdevs_operational": 2, 00:13:14.986 "base_bdevs_list": [ 00:13:14.986 { 00:13:14.986 "name": null, 00:13:14.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.986 "is_configured": false, 00:13:14.986 "data_offset": 0, 00:13:14.986 "data_size": 63488 00:13:14.986 }, 00:13:14.986 { 00:13:14.986 "name": null, 00:13:14.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.986 "is_configured": false, 00:13:14.986 "data_offset": 2048, 00:13:14.986 "data_size": 63488 00:13:14.986 }, 00:13:14.986 { 00:13:14.986 "name": "BaseBdev3", 00:13:14.986 "uuid": "14e4700d-aea7-5558-b677-38534c13fa80", 00:13:14.986 "is_configured": true, 00:13:14.986 "data_offset": 2048, 00:13:14.986 "data_size": 63488 00:13:14.986 }, 00:13:14.986 { 00:13:14.986 "name": "BaseBdev4", 00:13:14.986 "uuid": "87d8d179-a96f-55b7-8c56-fd01f6245b96", 00:13:14.986 "is_configured": true, 00:13:14.986 "data_offset": 2048, 00:13:14.986 "data_size": 63488 00:13:14.986 } 00:13:14.987 ] 00:13:14.987 }' 00:13:14.987 04:11:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:14.987 04:11:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.247 04:11:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:15.247 04:11:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.247 04:11:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.247 [2024-11-21 04:11:15.098739] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:15.247 [2024-11-21 04:11:15.098947] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 
00:13:15.247 [2024-11-21 04:11:15.098966] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:15.247 [2024-11-21 04:11:15.099015] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:15.247 [2024-11-21 04:11:15.106072] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caebd0 00:13:15.247 04:11:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.247 04:11:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:15.247 [2024-11-21 04:11:15.108313] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:16.188 04:11:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:16.188 04:11:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:16.188 04:11:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:16.188 04:11:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:16.188 04:11:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:16.188 04:11:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.188 04:11:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.188 04:11:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.188 04:11:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.188 04:11:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.448 04:11:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:16.448 "name": "raid_bdev1", 00:13:16.448 "uuid": 
"16289533-7ae1-4ed7-b6b6-57e627dc4fd6", 00:13:16.448 "strip_size_kb": 0, 00:13:16.448 "state": "online", 00:13:16.448 "raid_level": "raid1", 00:13:16.448 "superblock": true, 00:13:16.448 "num_base_bdevs": 4, 00:13:16.448 "num_base_bdevs_discovered": 3, 00:13:16.448 "num_base_bdevs_operational": 3, 00:13:16.448 "process": { 00:13:16.448 "type": "rebuild", 00:13:16.448 "target": "spare", 00:13:16.448 "progress": { 00:13:16.448 "blocks": 20480, 00:13:16.448 "percent": 32 00:13:16.448 } 00:13:16.448 }, 00:13:16.448 "base_bdevs_list": [ 00:13:16.448 { 00:13:16.448 "name": "spare", 00:13:16.448 "uuid": "f7865591-2cd8-5f05-ae6d-cf520f412177", 00:13:16.448 "is_configured": true, 00:13:16.448 "data_offset": 2048, 00:13:16.448 "data_size": 63488 00:13:16.448 }, 00:13:16.448 { 00:13:16.448 "name": null, 00:13:16.448 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.448 "is_configured": false, 00:13:16.448 "data_offset": 2048, 00:13:16.448 "data_size": 63488 00:13:16.448 }, 00:13:16.448 { 00:13:16.448 "name": "BaseBdev3", 00:13:16.448 "uuid": "14e4700d-aea7-5558-b677-38534c13fa80", 00:13:16.448 "is_configured": true, 00:13:16.448 "data_offset": 2048, 00:13:16.448 "data_size": 63488 00:13:16.448 }, 00:13:16.448 { 00:13:16.448 "name": "BaseBdev4", 00:13:16.448 "uuid": "87d8d179-a96f-55b7-8c56-fd01f6245b96", 00:13:16.448 "is_configured": true, 00:13:16.448 "data_offset": 2048, 00:13:16.448 "data_size": 63488 00:13:16.448 } 00:13:16.448 ] 00:13:16.448 }' 00:13:16.448 04:11:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:16.448 04:11:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:16.448 04:11:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:16.448 04:11:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:16.448 04:11:16 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:16.448 04:11:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.448 04:11:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.448 [2024-11-21 04:11:16.268366] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:16.448 [2024-11-21 04:11:16.315966] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:16.448 [2024-11-21 04:11:16.316034] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:16.448 [2024-11-21 04:11:16.316050] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:16.448 [2024-11-21 04:11:16.316059] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:16.448 04:11:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.448 04:11:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:16.448 04:11:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:16.448 04:11:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:16.448 04:11:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:16.448 04:11:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:16.448 04:11:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:16.448 04:11:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:16.448 04:11:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:16.448 04:11:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:13:16.448 04:11:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:16.448 04:11:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.448 04:11:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.448 04:11:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.448 04:11:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.448 04:11:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.448 04:11:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.448 "name": "raid_bdev1", 00:13:16.448 "uuid": "16289533-7ae1-4ed7-b6b6-57e627dc4fd6", 00:13:16.448 "strip_size_kb": 0, 00:13:16.448 "state": "online", 00:13:16.448 "raid_level": "raid1", 00:13:16.448 "superblock": true, 00:13:16.448 "num_base_bdevs": 4, 00:13:16.448 "num_base_bdevs_discovered": 2, 00:13:16.448 "num_base_bdevs_operational": 2, 00:13:16.448 "base_bdevs_list": [ 00:13:16.448 { 00:13:16.448 "name": null, 00:13:16.448 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.448 "is_configured": false, 00:13:16.448 "data_offset": 0, 00:13:16.448 "data_size": 63488 00:13:16.448 }, 00:13:16.448 { 00:13:16.448 "name": null, 00:13:16.448 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.448 "is_configured": false, 00:13:16.448 "data_offset": 2048, 00:13:16.448 "data_size": 63488 00:13:16.448 }, 00:13:16.448 { 00:13:16.448 "name": "BaseBdev3", 00:13:16.448 "uuid": "14e4700d-aea7-5558-b677-38534c13fa80", 00:13:16.448 "is_configured": true, 00:13:16.448 "data_offset": 2048, 00:13:16.448 "data_size": 63488 00:13:16.448 }, 00:13:16.448 { 00:13:16.448 "name": "BaseBdev4", 00:13:16.448 "uuid": "87d8d179-a96f-55b7-8c56-fd01f6245b96", 00:13:16.448 "is_configured": true, 00:13:16.448 
"data_offset": 2048, 00:13:16.448 "data_size": 63488 00:13:16.448 } 00:13:16.448 ] 00:13:16.448 }' 00:13:16.448 04:11:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.448 04:11:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.018 04:11:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:17.018 04:11:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.018 04:11:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.018 [2024-11-21 04:11:16.758697] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:17.018 [2024-11-21 04:11:16.758833] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:17.018 [2024-11-21 04:11:16.758880] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:13:17.018 [2024-11-21 04:11:16.758910] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:17.018 [2024-11-21 04:11:16.759483] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:17.018 [2024-11-21 04:11:16.759547] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:17.018 [2024-11-21 04:11:16.759693] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:17.018 [2024-11-21 04:11:16.759739] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:13:17.018 [2024-11-21 04:11:16.759784] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:17.018 [2024-11-21 04:11:16.759910] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:17.018 [2024-11-21 04:11:16.766808] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caeca0 00:13:17.018 spare 00:13:17.018 04:11:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.018 04:11:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:17.018 [2024-11-21 04:11:16.769091] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:17.958 04:11:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:17.958 04:11:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:17.958 04:11:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:17.958 04:11:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:17.958 04:11:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:17.958 04:11:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.958 04:11:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:17.958 04:11:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.958 04:11:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.958 04:11:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.958 04:11:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:17.958 "name": "raid_bdev1", 00:13:17.958 "uuid": "16289533-7ae1-4ed7-b6b6-57e627dc4fd6", 00:13:17.958 "strip_size_kb": 0, 00:13:17.958 "state": "online", 00:13:17.958 
"raid_level": "raid1", 00:13:17.958 "superblock": true, 00:13:17.958 "num_base_bdevs": 4, 00:13:17.958 "num_base_bdevs_discovered": 3, 00:13:17.958 "num_base_bdevs_operational": 3, 00:13:17.958 "process": { 00:13:17.958 "type": "rebuild", 00:13:17.958 "target": "spare", 00:13:17.958 "progress": { 00:13:17.958 "blocks": 20480, 00:13:17.958 "percent": 32 00:13:17.958 } 00:13:17.958 }, 00:13:17.958 "base_bdevs_list": [ 00:13:17.958 { 00:13:17.958 "name": "spare", 00:13:17.958 "uuid": "f7865591-2cd8-5f05-ae6d-cf520f412177", 00:13:17.958 "is_configured": true, 00:13:17.958 "data_offset": 2048, 00:13:17.958 "data_size": 63488 00:13:17.958 }, 00:13:17.958 { 00:13:17.958 "name": null, 00:13:17.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.958 "is_configured": false, 00:13:17.958 "data_offset": 2048, 00:13:17.958 "data_size": 63488 00:13:17.958 }, 00:13:17.958 { 00:13:17.958 "name": "BaseBdev3", 00:13:17.958 "uuid": "14e4700d-aea7-5558-b677-38534c13fa80", 00:13:17.958 "is_configured": true, 00:13:17.958 "data_offset": 2048, 00:13:17.958 "data_size": 63488 00:13:17.958 }, 00:13:17.958 { 00:13:17.958 "name": "BaseBdev4", 00:13:17.959 "uuid": "87d8d179-a96f-55b7-8c56-fd01f6245b96", 00:13:17.959 "is_configured": true, 00:13:17.959 "data_offset": 2048, 00:13:17.959 "data_size": 63488 00:13:17.959 } 00:13:17.959 ] 00:13:17.959 }' 00:13:17.959 04:11:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:17.959 04:11:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:17.959 04:11:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:17.959 04:11:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:17.959 04:11:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:17.959 04:11:17 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.959 04:11:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.959 [2024-11-21 04:11:17.901205] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:18.218 [2024-11-21 04:11:17.976810] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:18.218 [2024-11-21 04:11:17.976869] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:18.218 [2024-11-21 04:11:17.976888] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:18.218 [2024-11-21 04:11:17.976895] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:18.218 04:11:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.218 04:11:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:18.218 04:11:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:18.218 04:11:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:18.218 04:11:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:18.218 04:11:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:18.218 04:11:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:18.218 04:11:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:18.218 04:11:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:18.218 04:11:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:18.218 04:11:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:18.218 
04:11:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.218 04:11:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:18.218 04:11:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.218 04:11:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.218 04:11:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.218 04:11:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:18.218 "name": "raid_bdev1", 00:13:18.218 "uuid": "16289533-7ae1-4ed7-b6b6-57e627dc4fd6", 00:13:18.218 "strip_size_kb": 0, 00:13:18.218 "state": "online", 00:13:18.218 "raid_level": "raid1", 00:13:18.218 "superblock": true, 00:13:18.218 "num_base_bdevs": 4, 00:13:18.218 "num_base_bdevs_discovered": 2, 00:13:18.218 "num_base_bdevs_operational": 2, 00:13:18.218 "base_bdevs_list": [ 00:13:18.218 { 00:13:18.218 "name": null, 00:13:18.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.218 "is_configured": false, 00:13:18.218 "data_offset": 0, 00:13:18.218 "data_size": 63488 00:13:18.218 }, 00:13:18.218 { 00:13:18.218 "name": null, 00:13:18.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.218 "is_configured": false, 00:13:18.218 "data_offset": 2048, 00:13:18.218 "data_size": 63488 00:13:18.218 }, 00:13:18.218 { 00:13:18.218 "name": "BaseBdev3", 00:13:18.218 "uuid": "14e4700d-aea7-5558-b677-38534c13fa80", 00:13:18.218 "is_configured": true, 00:13:18.218 "data_offset": 2048, 00:13:18.218 "data_size": 63488 00:13:18.218 }, 00:13:18.218 { 00:13:18.218 "name": "BaseBdev4", 00:13:18.218 "uuid": "87d8d179-a96f-55b7-8c56-fd01f6245b96", 00:13:18.218 "is_configured": true, 00:13:18.218 "data_offset": 2048, 00:13:18.218 "data_size": 63488 00:13:18.218 } 00:13:18.218 ] 00:13:18.218 }' 00:13:18.218 04:11:18 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:18.218 04:11:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.477 04:11:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:18.477 04:11:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:18.477 04:11:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:18.477 04:11:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:18.477 04:11:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:18.477 04:11:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.477 04:11:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:18.477 04:11:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.477 04:11:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.737 04:11:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.737 04:11:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:18.737 "name": "raid_bdev1", 00:13:18.737 "uuid": "16289533-7ae1-4ed7-b6b6-57e627dc4fd6", 00:13:18.737 "strip_size_kb": 0, 00:13:18.737 "state": "online", 00:13:18.737 "raid_level": "raid1", 00:13:18.737 "superblock": true, 00:13:18.737 "num_base_bdevs": 4, 00:13:18.737 "num_base_bdevs_discovered": 2, 00:13:18.737 "num_base_bdevs_operational": 2, 00:13:18.737 "base_bdevs_list": [ 00:13:18.737 { 00:13:18.737 "name": null, 00:13:18.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.737 "is_configured": false, 00:13:18.737 "data_offset": 0, 00:13:18.737 "data_size": 63488 00:13:18.737 }, 00:13:18.737 
{ 00:13:18.737 "name": null, 00:13:18.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.737 "is_configured": false, 00:13:18.737 "data_offset": 2048, 00:13:18.737 "data_size": 63488 00:13:18.737 }, 00:13:18.737 { 00:13:18.737 "name": "BaseBdev3", 00:13:18.737 "uuid": "14e4700d-aea7-5558-b677-38534c13fa80", 00:13:18.737 "is_configured": true, 00:13:18.737 "data_offset": 2048, 00:13:18.737 "data_size": 63488 00:13:18.737 }, 00:13:18.737 { 00:13:18.737 "name": "BaseBdev4", 00:13:18.737 "uuid": "87d8d179-a96f-55b7-8c56-fd01f6245b96", 00:13:18.737 "is_configured": true, 00:13:18.737 "data_offset": 2048, 00:13:18.737 "data_size": 63488 00:13:18.737 } 00:13:18.737 ] 00:13:18.737 }' 00:13:18.737 04:11:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:18.737 04:11:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:18.737 04:11:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:18.737 04:11:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:18.737 04:11:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:18.737 04:11:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.737 04:11:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.737 04:11:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.737 04:11:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:18.737 04:11:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.737 04:11:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.737 [2024-11-21 04:11:18.599099] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:18.737 [2024-11-21 04:11:18.599157] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:18.737 [2024-11-21 04:11:18.599184] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:13:18.737 [2024-11-21 04:11:18.599194] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:18.737 [2024-11-21 04:11:18.599723] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:18.737 [2024-11-21 04:11:18.599748] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:18.737 [2024-11-21 04:11:18.599828] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:18.737 [2024-11-21 04:11:18.599842] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:13:18.737 [2024-11-21 04:11:18.599856] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:18.737 [2024-11-21 04:11:18.599867] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:18.737 BaseBdev1 00:13:18.737 04:11:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.737 04:11:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:19.676 04:11:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:19.676 04:11:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:19.676 04:11:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:19.676 04:11:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:19.676 04:11:19 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:19.676 04:11:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:19.676 04:11:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:19.676 04:11:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:19.676 04:11:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:19.676 04:11:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:19.676 04:11:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.676 04:11:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.676 04:11:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.676 04:11:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.676 04:11:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.936 04:11:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:19.936 "name": "raid_bdev1", 00:13:19.936 "uuid": "16289533-7ae1-4ed7-b6b6-57e627dc4fd6", 00:13:19.936 "strip_size_kb": 0, 00:13:19.936 "state": "online", 00:13:19.936 "raid_level": "raid1", 00:13:19.936 "superblock": true, 00:13:19.936 "num_base_bdevs": 4, 00:13:19.936 "num_base_bdevs_discovered": 2, 00:13:19.936 "num_base_bdevs_operational": 2, 00:13:19.936 "base_bdevs_list": [ 00:13:19.936 { 00:13:19.936 "name": null, 00:13:19.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:19.936 "is_configured": false, 00:13:19.936 "data_offset": 0, 00:13:19.936 "data_size": 63488 00:13:19.936 }, 00:13:19.936 { 00:13:19.936 "name": null, 00:13:19.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:19.936 
"is_configured": false, 00:13:19.936 "data_offset": 2048, 00:13:19.936 "data_size": 63488 00:13:19.936 }, 00:13:19.936 { 00:13:19.936 "name": "BaseBdev3", 00:13:19.936 "uuid": "14e4700d-aea7-5558-b677-38534c13fa80", 00:13:19.936 "is_configured": true, 00:13:19.936 "data_offset": 2048, 00:13:19.936 "data_size": 63488 00:13:19.936 }, 00:13:19.936 { 00:13:19.936 "name": "BaseBdev4", 00:13:19.936 "uuid": "87d8d179-a96f-55b7-8c56-fd01f6245b96", 00:13:19.936 "is_configured": true, 00:13:19.936 "data_offset": 2048, 00:13:19.936 "data_size": 63488 00:13:19.936 } 00:13:19.936 ] 00:13:19.936 }' 00:13:19.936 04:11:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:19.936 04:11:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.196 04:11:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:20.196 04:11:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:20.196 04:11:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:20.196 04:11:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:20.196 04:11:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:20.196 04:11:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.196 04:11:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:20.196 04:11:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.196 04:11:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.196 04:11:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.196 04:11:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:13:20.196 "name": "raid_bdev1", 00:13:20.196 "uuid": "16289533-7ae1-4ed7-b6b6-57e627dc4fd6", 00:13:20.196 "strip_size_kb": 0, 00:13:20.196 "state": "online", 00:13:20.196 "raid_level": "raid1", 00:13:20.196 "superblock": true, 00:13:20.196 "num_base_bdevs": 4, 00:13:20.196 "num_base_bdevs_discovered": 2, 00:13:20.196 "num_base_bdevs_operational": 2, 00:13:20.196 "base_bdevs_list": [ 00:13:20.196 { 00:13:20.196 "name": null, 00:13:20.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.196 "is_configured": false, 00:13:20.196 "data_offset": 0, 00:13:20.196 "data_size": 63488 00:13:20.196 }, 00:13:20.196 { 00:13:20.196 "name": null, 00:13:20.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.196 "is_configured": false, 00:13:20.196 "data_offset": 2048, 00:13:20.196 "data_size": 63488 00:13:20.196 }, 00:13:20.196 { 00:13:20.196 "name": "BaseBdev3", 00:13:20.196 "uuid": "14e4700d-aea7-5558-b677-38534c13fa80", 00:13:20.196 "is_configured": true, 00:13:20.196 "data_offset": 2048, 00:13:20.196 "data_size": 63488 00:13:20.196 }, 00:13:20.196 { 00:13:20.196 "name": "BaseBdev4", 00:13:20.196 "uuid": "87d8d179-a96f-55b7-8c56-fd01f6245b96", 00:13:20.196 "is_configured": true, 00:13:20.196 "data_offset": 2048, 00:13:20.196 "data_size": 63488 00:13:20.196 } 00:13:20.196 ] 00:13:20.196 }' 00:13:20.196 04:11:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:20.456 04:11:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:20.456 04:11:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:20.456 04:11:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:20.456 04:11:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:20.456 04:11:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local 
es=0 00:13:20.456 04:11:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:20.456 04:11:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:20.456 04:11:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:20.456 04:11:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:20.456 04:11:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:20.457 04:11:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:20.457 04:11:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.457 04:11:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.457 [2024-11-21 04:11:20.236349] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:20.457 [2024-11-21 04:11:20.236542] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:13:20.457 [2024-11-21 04:11:20.236559] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:20.457 request: 00:13:20.457 { 00:13:20.457 "base_bdev": "BaseBdev1", 00:13:20.457 "raid_bdev": "raid_bdev1", 00:13:20.457 "method": "bdev_raid_add_base_bdev", 00:13:20.457 "req_id": 1 00:13:20.457 } 00:13:20.457 Got JSON-RPC error response 00:13:20.457 response: 00:13:20.457 { 00:13:20.457 "code": -22, 00:13:20.457 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:20.457 } 00:13:20.457 04:11:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:20.457 04:11:20 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@655 -- # es=1 00:13:20.457 04:11:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:20.457 04:11:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:20.457 04:11:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:20.457 04:11:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:21.397 04:11:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:21.397 04:11:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:21.397 04:11:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:21.397 04:11:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:21.397 04:11:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:21.397 04:11:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:21.397 04:11:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:21.397 04:11:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:21.397 04:11:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:21.397 04:11:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:21.397 04:11:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.397 04:11:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:21.397 04:11:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.397 04:11:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:13:21.397 04:11:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.397 04:11:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:21.397 "name": "raid_bdev1", 00:13:21.397 "uuid": "16289533-7ae1-4ed7-b6b6-57e627dc4fd6", 00:13:21.397 "strip_size_kb": 0, 00:13:21.397 "state": "online", 00:13:21.397 "raid_level": "raid1", 00:13:21.397 "superblock": true, 00:13:21.397 "num_base_bdevs": 4, 00:13:21.397 "num_base_bdevs_discovered": 2, 00:13:21.397 "num_base_bdevs_operational": 2, 00:13:21.397 "base_bdevs_list": [ 00:13:21.397 { 00:13:21.397 "name": null, 00:13:21.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.397 "is_configured": false, 00:13:21.397 "data_offset": 0, 00:13:21.397 "data_size": 63488 00:13:21.397 }, 00:13:21.397 { 00:13:21.397 "name": null, 00:13:21.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.397 "is_configured": false, 00:13:21.397 "data_offset": 2048, 00:13:21.397 "data_size": 63488 00:13:21.397 }, 00:13:21.397 { 00:13:21.397 "name": "BaseBdev3", 00:13:21.397 "uuid": "14e4700d-aea7-5558-b677-38534c13fa80", 00:13:21.397 "is_configured": true, 00:13:21.397 "data_offset": 2048, 00:13:21.397 "data_size": 63488 00:13:21.397 }, 00:13:21.397 { 00:13:21.397 "name": "BaseBdev4", 00:13:21.397 "uuid": "87d8d179-a96f-55b7-8c56-fd01f6245b96", 00:13:21.397 "is_configured": true, 00:13:21.397 "data_offset": 2048, 00:13:21.397 "data_size": 63488 00:13:21.397 } 00:13:21.397 ] 00:13:21.397 }' 00:13:21.397 04:11:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:21.397 04:11:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.966 04:11:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:21.966 04:11:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:21.966 04:11:21 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:21.966 04:11:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:21.966 04:11:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:21.966 04:11:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.966 04:11:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.966 04:11:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.966 04:11:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:21.966 04:11:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.966 04:11:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:21.966 "name": "raid_bdev1", 00:13:21.966 "uuid": "16289533-7ae1-4ed7-b6b6-57e627dc4fd6", 00:13:21.966 "strip_size_kb": 0, 00:13:21.966 "state": "online", 00:13:21.966 "raid_level": "raid1", 00:13:21.966 "superblock": true, 00:13:21.966 "num_base_bdevs": 4, 00:13:21.966 "num_base_bdevs_discovered": 2, 00:13:21.966 "num_base_bdevs_operational": 2, 00:13:21.966 "base_bdevs_list": [ 00:13:21.966 { 00:13:21.966 "name": null, 00:13:21.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.966 "is_configured": false, 00:13:21.966 "data_offset": 0, 00:13:21.966 "data_size": 63488 00:13:21.966 }, 00:13:21.967 { 00:13:21.967 "name": null, 00:13:21.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.967 "is_configured": false, 00:13:21.967 "data_offset": 2048, 00:13:21.967 "data_size": 63488 00:13:21.967 }, 00:13:21.967 { 00:13:21.967 "name": "BaseBdev3", 00:13:21.967 "uuid": "14e4700d-aea7-5558-b677-38534c13fa80", 00:13:21.967 "is_configured": true, 00:13:21.967 "data_offset": 2048, 00:13:21.967 "data_size": 63488 00:13:21.967 }, 
00:13:21.967 { 00:13:21.967 "name": "BaseBdev4", 00:13:21.967 "uuid": "87d8d179-a96f-55b7-8c56-fd01f6245b96", 00:13:21.967 "is_configured": true, 00:13:21.967 "data_offset": 2048, 00:13:21.967 "data_size": 63488 00:13:21.967 } 00:13:21.967 ] 00:13:21.967 }' 00:13:21.967 04:11:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:21.967 04:11:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:21.967 04:11:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:21.967 04:11:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:21.967 04:11:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 88658 00:13:21.967 04:11:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 88658 ']' 00:13:21.967 04:11:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 88658 00:13:21.967 04:11:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:21.967 04:11:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:21.967 04:11:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88658 00:13:21.967 04:11:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:21.967 04:11:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:21.967 04:11:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88658' 00:13:21.967 killing process with pid 88658 00:13:21.967 04:11:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 88658 00:13:21.967 Received shutdown signal, test time was about 60.000000 seconds 00:13:21.967 00:13:21.967 Latency(us) 00:13:21.967 
[2024-11-21T04:11:21.940Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:21.967 [2024-11-21T04:11:21.940Z] =================================================================================================================== 00:13:21.967 [2024-11-21T04:11:21.940Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:21.967 [2024-11-21 04:11:21.830001] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:21.967 04:11:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 88658 00:13:21.967 [2024-11-21 04:11:21.830155] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:21.967 [2024-11-21 04:11:21.830248] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:21.967 [2024-11-21 04:11:21.830262] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:13:21.967 [2024-11-21 04:11:21.923709] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:22.537 04:11:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:13:22.537 00:13:22.537 real 0m23.577s 00:13:22.537 user 0m28.344s 00:13:22.537 sys 0m3.843s 00:13:22.537 ************************************ 00:13:22.537 END TEST raid_rebuild_test_sb 00:13:22.537 ************************************ 00:13:22.537 04:11:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:22.537 04:11:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.537 04:11:22 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:13:22.537 04:11:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:22.537 04:11:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:22.537 04:11:22 bdev_raid -- common/autotest_common.sh@10 -- # 
set +x 00:13:22.537 ************************************ 00:13:22.537 START TEST raid_rebuild_test_io 00:13:22.537 ************************************ 00:13:22.537 04:11:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true 00:13:22.537 04:11:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:22.537 04:11:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:22.537 04:11:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:22.537 04:11:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:22.537 04:11:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:22.537 04:11:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:22.537 04:11:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:22.537 04:11:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:22.537 04:11:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:22.537 04:11:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:22.537 04:11:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:22.537 04:11:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:22.537 04:11:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:22.537 04:11:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:22.537 04:11:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:22.537 04:11:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:22.537 04:11:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # 
echo BaseBdev4 00:13:22.537 04:11:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:22.537 04:11:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:22.537 04:11:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:22.537 04:11:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:22.537 04:11:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:22.537 04:11:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:22.537 04:11:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:22.537 04:11:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:22.537 04:11:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:22.537 04:11:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:22.537 04:11:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:22.537 04:11:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:22.537 04:11:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=89401 00:13:22.538 04:11:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:22.538 04:11:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 89401 00:13:22.538 04:11:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 89401 ']' 00:13:22.538 04:11:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:22.538 04:11:22 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:13:22.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:22.538 04:11:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:22.538 04:11:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:22.538 04:11:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.538 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:22.538 Zero copy mechanism will not be used. 00:13:22.538 [2024-11-21 04:11:22.411359] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:13:22.538 [2024-11-21 04:11:22.411473] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89401 ] 00:13:22.798 [2024-11-21 04:11:22.564594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:22.798 [2024-11-21 04:11:22.603317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:22.798 [2024-11-21 04:11:22.679564] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:22.798 [2024-11-21 04:11:22.679606] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:23.369 04:11:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:23.369 04:11:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:13:23.369 04:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:23.369 04:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 
00:13:23.369 04:11:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.369 04:11:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.369 BaseBdev1_malloc 00:13:23.369 04:11:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.369 04:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:23.369 04:11:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.369 04:11:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.369 [2024-11-21 04:11:23.241304] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:23.369 [2024-11-21 04:11:23.241459] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:23.369 [2024-11-21 04:11:23.241509] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:13:23.369 [2024-11-21 04:11:23.241564] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:23.369 [2024-11-21 04:11:23.244015] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:23.369 [2024-11-21 04:11:23.244106] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:23.369 BaseBdev1 00:13:23.369 04:11:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.369 04:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:23.369 04:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:23.369 04:11:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.369 04:11:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # 
set +x 00:13:23.369 BaseBdev2_malloc 00:13:23.369 04:11:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.369 04:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:23.369 04:11:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.369 04:11:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.369 [2024-11-21 04:11:23.275878] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:23.369 [2024-11-21 04:11:23.275925] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:23.369 [2024-11-21 04:11:23.275961] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:23.369 [2024-11-21 04:11:23.275970] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:23.369 [2024-11-21 04:11:23.278445] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:23.369 [2024-11-21 04:11:23.278483] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:23.369 BaseBdev2 00:13:23.369 04:11:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.369 04:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:23.369 04:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:23.369 04:11:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.369 04:11:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.369 BaseBdev3_malloc 00:13:23.369 04:11:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.369 04:11:23 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:23.369 04:11:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.369 04:11:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.369 [2024-11-21 04:11:23.310605] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:23.369 [2024-11-21 04:11:23.310722] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:23.369 [2024-11-21 04:11:23.310763] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:23.369 [2024-11-21 04:11:23.310812] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:23.369 [2024-11-21 04:11:23.313319] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:23.369 [2024-11-21 04:11:23.313403] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:23.369 BaseBdev3 00:13:23.369 04:11:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.369 04:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:23.369 04:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:23.369 04:11:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.369 04:11:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.630 BaseBdev4_malloc 00:13:23.630 04:11:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.630 04:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:23.630 04:11:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:23.630 04:11:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.630 [2024-11-21 04:11:23.353899] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:23.630 [2024-11-21 04:11:23.353992] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:23.630 [2024-11-21 04:11:23.354048] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:23.630 [2024-11-21 04:11:23.354074] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:23.630 [2024-11-21 04:11:23.356494] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:23.630 [2024-11-21 04:11:23.356584] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:23.630 BaseBdev4 00:13:23.630 04:11:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.630 04:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:23.630 04:11:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.630 04:11:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.630 spare_malloc 00:13:23.630 04:11:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.630 04:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:23.630 04:11:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.630 04:11:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.630 spare_delay 00:13:23.630 04:11:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.630 04:11:23 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:23.630 04:11:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.630 04:11:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.630 [2024-11-21 04:11:23.400258] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:23.630 [2024-11-21 04:11:23.400370] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:23.630 [2024-11-21 04:11:23.400407] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:13:23.630 [2024-11-21 04:11:23.400435] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:23.631 [2024-11-21 04:11:23.402882] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:23.631 [2024-11-21 04:11:23.402918] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:23.631 spare 00:13:23.631 04:11:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.631 04:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:23.631 04:11:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.631 04:11:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.631 [2024-11-21 04:11:23.412306] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:23.631 [2024-11-21 04:11:23.414469] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:23.631 [2024-11-21 04:11:23.414569] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:23.631 [2024-11-21 04:11:23.414649] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev4 is claimed 00:13:23.631 [2024-11-21 04:11:23.414778] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:13:23.631 [2024-11-21 04:11:23.414824] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:23.631 [2024-11-21 04:11:23.415130] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:13:23.631 [2024-11-21 04:11:23.415336] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:13:23.631 [2024-11-21 04:11:23.415384] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:13:23.631 [2024-11-21 04:11:23.415582] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:23.631 04:11:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.631 04:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:23.631 04:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:23.631 04:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:23.631 04:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:23.631 04:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:23.631 04:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:23.631 04:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:23.631 04:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:23.631 04:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:23.631 04:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:13:23.631 04:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.631 04:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.631 04:11:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.631 04:11:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.631 04:11:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.631 04:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:23.631 "name": "raid_bdev1", 00:13:23.631 "uuid": "cf19f548-17cf-46d7-b815-1dc16b172ed8", 00:13:23.631 "strip_size_kb": 0, 00:13:23.631 "state": "online", 00:13:23.631 "raid_level": "raid1", 00:13:23.631 "superblock": false, 00:13:23.631 "num_base_bdevs": 4, 00:13:23.631 "num_base_bdevs_discovered": 4, 00:13:23.631 "num_base_bdevs_operational": 4, 00:13:23.631 "base_bdevs_list": [ 00:13:23.631 { 00:13:23.631 "name": "BaseBdev1", 00:13:23.631 "uuid": "518dd690-b95a-59a2-a421-7893aac85215", 00:13:23.631 "is_configured": true, 00:13:23.631 "data_offset": 0, 00:13:23.631 "data_size": 65536 00:13:23.631 }, 00:13:23.631 { 00:13:23.631 "name": "BaseBdev2", 00:13:23.631 "uuid": "6bd96373-2e7b-585f-abe3-874952fb20d5", 00:13:23.631 "is_configured": true, 00:13:23.631 "data_offset": 0, 00:13:23.631 "data_size": 65536 00:13:23.631 }, 00:13:23.631 { 00:13:23.631 "name": "BaseBdev3", 00:13:23.631 "uuid": "350535eb-a909-5a4c-a964-4f21444b97b1", 00:13:23.631 "is_configured": true, 00:13:23.631 "data_offset": 0, 00:13:23.631 "data_size": 65536 00:13:23.631 }, 00:13:23.631 { 00:13:23.631 "name": "BaseBdev4", 00:13:23.631 "uuid": "85614a80-fccd-5074-b8b7-03257138bcfa", 00:13:23.631 "is_configured": true, 00:13:23.631 "data_offset": 0, 00:13:23.631 "data_size": 65536 00:13:23.631 } 00:13:23.631 ] 00:13:23.631 }' 00:13:23.631 
04:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:23.631 04:11:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.891 04:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:23.891 04:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:23.891 04:11:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.891 04:11:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.891 [2024-11-21 04:11:23.848314] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:24.151 04:11:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.151 04:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:24.151 04:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.151 04:11:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.151 04:11:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:24.151 04:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:24.151 04:11:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.151 04:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:24.151 04:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:24.151 04:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:24.151 04:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:24.151 04:11:23 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.151 04:11:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:24.151 [2024-11-21 04:11:23.943761] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:24.151 04:11:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.151 04:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:24.151 04:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:24.151 04:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:24.151 04:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:24.151 04:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:24.151 04:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:24.151 04:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:24.151 04:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:24.151 04:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:24.151 04:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:24.151 04:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.151 04:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:24.151 04:11:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.151 04:11:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:24.151 04:11:23 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.151 04:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:24.151 "name": "raid_bdev1", 00:13:24.151 "uuid": "cf19f548-17cf-46d7-b815-1dc16b172ed8", 00:13:24.151 "strip_size_kb": 0, 00:13:24.151 "state": "online", 00:13:24.151 "raid_level": "raid1", 00:13:24.151 "superblock": false, 00:13:24.151 "num_base_bdevs": 4, 00:13:24.151 "num_base_bdevs_discovered": 3, 00:13:24.151 "num_base_bdevs_operational": 3, 00:13:24.151 "base_bdevs_list": [ 00:13:24.151 { 00:13:24.151 "name": null, 00:13:24.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.151 "is_configured": false, 00:13:24.151 "data_offset": 0, 00:13:24.151 "data_size": 65536 00:13:24.151 }, 00:13:24.151 { 00:13:24.151 "name": "BaseBdev2", 00:13:24.151 "uuid": "6bd96373-2e7b-585f-abe3-874952fb20d5", 00:13:24.151 "is_configured": true, 00:13:24.151 "data_offset": 0, 00:13:24.151 "data_size": 65536 00:13:24.151 }, 00:13:24.151 { 00:13:24.151 "name": "BaseBdev3", 00:13:24.151 "uuid": "350535eb-a909-5a4c-a964-4f21444b97b1", 00:13:24.151 "is_configured": true, 00:13:24.151 "data_offset": 0, 00:13:24.151 "data_size": 65536 00:13:24.151 }, 00:13:24.152 { 00:13:24.152 "name": "BaseBdev4", 00:13:24.152 "uuid": "85614a80-fccd-5074-b8b7-03257138bcfa", 00:13:24.152 "is_configured": true, 00:13:24.152 "data_offset": 0, 00:13:24.152 "data_size": 65536 00:13:24.152 } 00:13:24.152 ] 00:13:24.152 }' 00:13:24.152 04:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:24.152 04:11:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:24.152 [2024-11-21 04:11:24.035018] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:13:24.152 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:24.152 Zero copy mechanism will not be used. 00:13:24.152 Running I/O for 60 seconds... 
00:13:24.722 04:11:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:24.722 04:11:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.722 04:11:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:24.722 [2024-11-21 04:11:24.405127] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:24.722 04:11:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.722 04:11:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:24.722 [2024-11-21 04:11:24.465720] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002940 00:13:24.722 [2024-11-21 04:11:24.468158] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:24.722 [2024-11-21 04:11:24.578711] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:24.722 [2024-11-21 04:11:24.579650] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:24.722 [2024-11-21 04:11:24.692862] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:24.981 [2024-11-21 04:11:24.694036] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:25.241 188.00 IOPS, 564.00 MiB/s [2024-11-21T04:11:25.214Z] [2024-11-21 04:11:25.046475] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:25.501 [2024-11-21 04:11:25.282059] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:25.501 04:11:25 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:25.501 04:11:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:25.501 04:11:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:25.501 04:11:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:25.501 04:11:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:25.501 04:11:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.501 04:11:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.501 04:11:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.501 04:11:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.761 04:11:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.761 04:11:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:25.761 "name": "raid_bdev1", 00:13:25.761 "uuid": "cf19f548-17cf-46d7-b815-1dc16b172ed8", 00:13:25.761 "strip_size_kb": 0, 00:13:25.761 "state": "online", 00:13:25.761 "raid_level": "raid1", 00:13:25.761 "superblock": false, 00:13:25.761 "num_base_bdevs": 4, 00:13:25.761 "num_base_bdevs_discovered": 4, 00:13:25.761 "num_base_bdevs_operational": 4, 00:13:25.761 "process": { 00:13:25.761 "type": "rebuild", 00:13:25.761 "target": "spare", 00:13:25.761 "progress": { 00:13:25.761 "blocks": 10240, 00:13:25.761 "percent": 15 00:13:25.761 } 00:13:25.761 }, 00:13:25.761 "base_bdevs_list": [ 00:13:25.761 { 00:13:25.761 "name": "spare", 00:13:25.761 "uuid": "c4c44b6e-f5e3-5a71-b9f3-1b57832332b0", 00:13:25.761 "is_configured": true, 00:13:25.761 "data_offset": 0, 00:13:25.761 "data_size": 65536 00:13:25.761 }, 00:13:25.761 { 
00:13:25.761 "name": "BaseBdev2", 00:13:25.761 "uuid": "6bd96373-2e7b-585f-abe3-874952fb20d5", 00:13:25.761 "is_configured": true, 00:13:25.761 "data_offset": 0, 00:13:25.761 "data_size": 65536 00:13:25.761 }, 00:13:25.761 { 00:13:25.761 "name": "BaseBdev3", 00:13:25.761 "uuid": "350535eb-a909-5a4c-a964-4f21444b97b1", 00:13:25.761 "is_configured": true, 00:13:25.761 "data_offset": 0, 00:13:25.761 "data_size": 65536 00:13:25.761 }, 00:13:25.761 { 00:13:25.761 "name": "BaseBdev4", 00:13:25.761 "uuid": "85614a80-fccd-5074-b8b7-03257138bcfa", 00:13:25.761 "is_configured": true, 00:13:25.761 "data_offset": 0, 00:13:25.761 "data_size": 65536 00:13:25.761 } 00:13:25.762 ] 00:13:25.762 }' 00:13:25.762 04:11:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:25.762 04:11:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:25.762 04:11:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:25.762 04:11:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:25.762 04:11:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:25.762 04:11:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.762 04:11:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.762 [2024-11-21 04:11:25.600521] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:25.762 [2024-11-21 04:11:25.612342] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:25.762 [2024-11-21 04:11:25.614272] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:25.762 [2024-11-21 04:11:25.722052] bdev_raid.c:2571:raid_bdev_process_finish_done: 
*WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:26.022 [2024-11-21 04:11:25.734288] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:26.022 [2024-11-21 04:11:25.734337] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:26.022 [2024-11-21 04:11:25.734351] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:26.022 [2024-11-21 04:11:25.760796] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002870 00:13:26.022 04:11:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.022 04:11:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:26.022 04:11:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:26.022 04:11:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:26.022 04:11:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:26.022 04:11:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:26.022 04:11:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:26.022 04:11:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:26.022 04:11:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:26.022 04:11:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:26.022 04:11:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:26.022 04:11:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:26.022 04:11:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:26.022 04:11:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.022 04:11:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:26.022 04:11:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.022 04:11:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:26.022 "name": "raid_bdev1", 00:13:26.022 "uuid": "cf19f548-17cf-46d7-b815-1dc16b172ed8", 00:13:26.022 "strip_size_kb": 0, 00:13:26.022 "state": "online", 00:13:26.022 "raid_level": "raid1", 00:13:26.022 "superblock": false, 00:13:26.022 "num_base_bdevs": 4, 00:13:26.022 "num_base_bdevs_discovered": 3, 00:13:26.022 "num_base_bdevs_operational": 3, 00:13:26.022 "base_bdevs_list": [ 00:13:26.022 { 00:13:26.022 "name": null, 00:13:26.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.022 "is_configured": false, 00:13:26.022 "data_offset": 0, 00:13:26.022 "data_size": 65536 00:13:26.022 }, 00:13:26.022 { 00:13:26.022 "name": "BaseBdev2", 00:13:26.022 "uuid": "6bd96373-2e7b-585f-abe3-874952fb20d5", 00:13:26.022 "is_configured": true, 00:13:26.022 "data_offset": 0, 00:13:26.022 "data_size": 65536 00:13:26.022 }, 00:13:26.022 { 00:13:26.022 "name": "BaseBdev3", 00:13:26.022 "uuid": "350535eb-a909-5a4c-a964-4f21444b97b1", 00:13:26.022 "is_configured": true, 00:13:26.022 "data_offset": 0, 00:13:26.022 "data_size": 65536 00:13:26.022 }, 00:13:26.022 { 00:13:26.022 "name": "BaseBdev4", 00:13:26.022 "uuid": "85614a80-fccd-5074-b8b7-03257138bcfa", 00:13:26.022 "is_configured": true, 00:13:26.022 "data_offset": 0, 00:13:26.022 "data_size": 65536 00:13:26.022 } 00:13:26.022 ] 00:13:26.022 }' 00:13:26.022 04:11:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:26.022 04:11:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:26.282 146.00 IOPS, 438.00 MiB/s 
[2024-11-21T04:11:26.255Z] 04:11:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:26.282 04:11:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:26.282 04:11:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:26.282 04:11:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:26.282 04:11:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:26.282 04:11:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.282 04:11:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.282 04:11:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:26.282 04:11:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:26.282 04:11:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.282 04:11:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:26.282 "name": "raid_bdev1", 00:13:26.282 "uuid": "cf19f548-17cf-46d7-b815-1dc16b172ed8", 00:13:26.282 "strip_size_kb": 0, 00:13:26.282 "state": "online", 00:13:26.282 "raid_level": "raid1", 00:13:26.282 "superblock": false, 00:13:26.282 "num_base_bdevs": 4, 00:13:26.282 "num_base_bdevs_discovered": 3, 00:13:26.282 "num_base_bdevs_operational": 3, 00:13:26.282 "base_bdevs_list": [ 00:13:26.282 { 00:13:26.282 "name": null, 00:13:26.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.282 "is_configured": false, 00:13:26.282 "data_offset": 0, 00:13:26.282 "data_size": 65536 00:13:26.282 }, 00:13:26.282 { 00:13:26.282 "name": "BaseBdev2", 00:13:26.282 "uuid": "6bd96373-2e7b-585f-abe3-874952fb20d5", 00:13:26.282 "is_configured": true, 00:13:26.282 
"data_offset": 0, 00:13:26.282 "data_size": 65536 00:13:26.282 }, 00:13:26.282 { 00:13:26.282 "name": "BaseBdev3", 00:13:26.282 "uuid": "350535eb-a909-5a4c-a964-4f21444b97b1", 00:13:26.282 "is_configured": true, 00:13:26.282 "data_offset": 0, 00:13:26.282 "data_size": 65536 00:13:26.282 }, 00:13:26.282 { 00:13:26.282 "name": "BaseBdev4", 00:13:26.282 "uuid": "85614a80-fccd-5074-b8b7-03257138bcfa", 00:13:26.282 "is_configured": true, 00:13:26.282 "data_offset": 0, 00:13:26.282 "data_size": 65536 00:13:26.282 } 00:13:26.282 ] 00:13:26.282 }' 00:13:26.282 04:11:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:26.543 04:11:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:26.543 04:11:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:26.543 04:11:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:26.543 04:11:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:26.543 04:11:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.543 04:11:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:26.543 [2024-11-21 04:11:26.341576] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:26.543 04:11:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.543 04:11:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:26.543 [2024-11-21 04:11:26.401051] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:13:26.543 [2024-11-21 04:11:26.403480] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:26.803 [2024-11-21 04:11:26.525644] bdev_raid.c: 859:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:26.803 [2024-11-21 04:11:26.527892] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:26.803 [2024-11-21 04:11:26.738758] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:26.803 [2024-11-21 04:11:26.739722] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:27.373 143.33 IOPS, 430.00 MiB/s [2024-11-21T04:11:27.346Z] [2024-11-21 04:11:27.079950] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:27.373 [2024-11-21 04:11:27.080744] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:27.373 [2024-11-21 04:11:27.300879] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:27.633 04:11:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:27.633 04:11:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:27.633 04:11:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:27.633 04:11:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:27.633 04:11:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:27.633 04:11:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.633 04:11:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.633 04:11:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:27.633 04:11:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.633 04:11:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.633 04:11:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:27.633 "name": "raid_bdev1", 00:13:27.633 "uuid": "cf19f548-17cf-46d7-b815-1dc16b172ed8", 00:13:27.633 "strip_size_kb": 0, 00:13:27.633 "state": "online", 00:13:27.633 "raid_level": "raid1", 00:13:27.633 "superblock": false, 00:13:27.633 "num_base_bdevs": 4, 00:13:27.633 "num_base_bdevs_discovered": 4, 00:13:27.633 "num_base_bdevs_operational": 4, 00:13:27.633 "process": { 00:13:27.633 "type": "rebuild", 00:13:27.633 "target": "spare", 00:13:27.633 "progress": { 00:13:27.633 "blocks": 10240, 00:13:27.633 "percent": 15 00:13:27.633 } 00:13:27.633 }, 00:13:27.633 "base_bdevs_list": [ 00:13:27.633 { 00:13:27.633 "name": "spare", 00:13:27.633 "uuid": "c4c44b6e-f5e3-5a71-b9f3-1b57832332b0", 00:13:27.633 "is_configured": true, 00:13:27.633 "data_offset": 0, 00:13:27.633 "data_size": 65536 00:13:27.633 }, 00:13:27.633 { 00:13:27.633 "name": "BaseBdev2", 00:13:27.633 "uuid": "6bd96373-2e7b-585f-abe3-874952fb20d5", 00:13:27.633 "is_configured": true, 00:13:27.633 "data_offset": 0, 00:13:27.633 "data_size": 65536 00:13:27.633 }, 00:13:27.633 { 00:13:27.633 "name": "BaseBdev3", 00:13:27.633 "uuid": "350535eb-a909-5a4c-a964-4f21444b97b1", 00:13:27.633 "is_configured": true, 00:13:27.633 "data_offset": 0, 00:13:27.633 "data_size": 65536 00:13:27.633 }, 00:13:27.633 { 00:13:27.633 "name": "BaseBdev4", 00:13:27.633 "uuid": "85614a80-fccd-5074-b8b7-03257138bcfa", 00:13:27.633 "is_configured": true, 00:13:27.633 "data_offset": 0, 00:13:27.633 "data_size": 65536 00:13:27.633 } 00:13:27.633 ] 00:13:27.633 }' 00:13:27.633 04:11:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:27.633 04:11:27 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:27.633 04:11:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:27.633 04:11:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:27.633 04:11:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:27.633 04:11:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:27.633 04:11:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:27.633 04:11:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:27.633 04:11:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:27.633 04:11:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.633 04:11:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.633 [2024-11-21 04:11:27.543958] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:27.893 [2024-11-21 04:11:27.634628] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:27.893 [2024-11-21 04:11:27.738016] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000002870 00:13:27.893 [2024-11-21 04:11:27.738120] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000002a10 00:13:27.893 04:11:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.893 04:11:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:27.893 04:11:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:27.893 04:11:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:27.893 04:11:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:27.893 04:11:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:27.893 04:11:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:27.893 04:11:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:27.893 04:11:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.893 04:11:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.893 04:11:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.893 04:11:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.893 04:11:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.893 04:11:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:27.893 "name": "raid_bdev1", 00:13:27.893 "uuid": "cf19f548-17cf-46d7-b815-1dc16b172ed8", 00:13:27.893 "strip_size_kb": 0, 00:13:27.893 "state": "online", 00:13:27.893 "raid_level": "raid1", 00:13:27.893 "superblock": false, 00:13:27.893 "num_base_bdevs": 4, 00:13:27.893 "num_base_bdevs_discovered": 3, 00:13:27.893 "num_base_bdevs_operational": 3, 00:13:27.893 "process": { 00:13:27.893 "type": "rebuild", 00:13:27.893 "target": "spare", 00:13:27.893 "progress": { 00:13:27.893 "blocks": 14336, 00:13:27.893 "percent": 21 00:13:27.893 } 00:13:27.893 }, 00:13:27.893 "base_bdevs_list": [ 00:13:27.893 { 00:13:27.893 "name": "spare", 00:13:27.893 "uuid": "c4c44b6e-f5e3-5a71-b9f3-1b57832332b0", 00:13:27.893 "is_configured": true, 00:13:27.893 "data_offset": 0, 00:13:27.893 "data_size": 65536 00:13:27.893 }, 00:13:27.893 { 00:13:27.893 "name": null, 
00:13:27.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.893 "is_configured": false, 00:13:27.893 "data_offset": 0, 00:13:27.893 "data_size": 65536 00:13:27.893 }, 00:13:27.893 { 00:13:27.893 "name": "BaseBdev3", 00:13:27.893 "uuid": "350535eb-a909-5a4c-a964-4f21444b97b1", 00:13:27.893 "is_configured": true, 00:13:27.893 "data_offset": 0, 00:13:27.893 "data_size": 65536 00:13:27.893 }, 00:13:27.893 { 00:13:27.893 "name": "BaseBdev4", 00:13:27.893 "uuid": "85614a80-fccd-5074-b8b7-03257138bcfa", 00:13:27.893 "is_configured": true, 00:13:27.893 "data_offset": 0, 00:13:27.893 "data_size": 65536 00:13:27.893 } 00:13:27.893 ] 00:13:27.893 }' 00:13:27.893 04:11:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:27.893 04:11:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:27.893 04:11:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:28.154 04:11:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:28.154 04:11:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=403 00:13:28.154 04:11:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:28.154 04:11:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:28.154 04:11:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:28.154 04:11:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:28.154 04:11:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:28.154 04:11:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:28.154 04:11:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:13:28.154 04:11:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:28.154 [2024-11-21 04:11:27.893699] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:28.154 04:11:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.154 04:11:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:28.154 04:11:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.154 04:11:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:28.154 "name": "raid_bdev1", 00:13:28.154 "uuid": "cf19f548-17cf-46d7-b815-1dc16b172ed8", 00:13:28.154 "strip_size_kb": 0, 00:13:28.154 "state": "online", 00:13:28.154 "raid_level": "raid1", 00:13:28.154 "superblock": false, 00:13:28.154 "num_base_bdevs": 4, 00:13:28.154 "num_base_bdevs_discovered": 3, 00:13:28.154 "num_base_bdevs_operational": 3, 00:13:28.154 "process": { 00:13:28.154 "type": "rebuild", 00:13:28.154 "target": "spare", 00:13:28.154 "progress": { 00:13:28.154 "blocks": 16384, 00:13:28.154 "percent": 25 00:13:28.154 } 00:13:28.154 }, 00:13:28.154 "base_bdevs_list": [ 00:13:28.154 { 00:13:28.154 "name": "spare", 00:13:28.154 "uuid": "c4c44b6e-f5e3-5a71-b9f3-1b57832332b0", 00:13:28.154 "is_configured": true, 00:13:28.154 "data_offset": 0, 00:13:28.154 "data_size": 65536 00:13:28.154 }, 00:13:28.154 { 00:13:28.154 "name": null, 00:13:28.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.154 "is_configured": false, 00:13:28.154 "data_offset": 0, 00:13:28.154 "data_size": 65536 00:13:28.154 }, 00:13:28.154 { 00:13:28.154 "name": "BaseBdev3", 00:13:28.154 "uuid": "350535eb-a909-5a4c-a964-4f21444b97b1", 00:13:28.154 "is_configured": true, 00:13:28.154 "data_offset": 0, 00:13:28.154 "data_size": 65536 00:13:28.154 }, 00:13:28.154 { 00:13:28.154 "name": 
"BaseBdev4", 00:13:28.154 "uuid": "85614a80-fccd-5074-b8b7-03257138bcfa", 00:13:28.154 "is_configured": true, 00:13:28.154 "data_offset": 0, 00:13:28.154 "data_size": 65536 00:13:28.154 } 00:13:28.154 ] 00:13:28.154 }' 00:13:28.154 04:11:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:28.154 04:11:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:28.154 04:11:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:28.154 04:11:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:28.154 04:11:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:28.154 126.25 IOPS, 378.75 MiB/s [2024-11-21T04:11:28.127Z] [2024-11-21 04:11:28.118048] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:13:28.154 [2024-11-21 04:11:28.119020] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:13:28.724 [2024-11-21 04:11:28.693940] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:13:29.294 04:11:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:29.294 04:11:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:29.294 112.20 IOPS, 336.60 MiB/s [2024-11-21T04:11:29.267Z] 04:11:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:29.294 04:11:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:29.294 04:11:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:29.294 04:11:29 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:29.294 04:11:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.294 04:11:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:29.294 04:11:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.294 04:11:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.294 [2024-11-21 04:11:29.076229] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:13:29.294 04:11:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.294 04:11:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:29.294 "name": "raid_bdev1", 00:13:29.294 "uuid": "cf19f548-17cf-46d7-b815-1dc16b172ed8", 00:13:29.294 "strip_size_kb": 0, 00:13:29.294 "state": "online", 00:13:29.294 "raid_level": "raid1", 00:13:29.294 "superblock": false, 00:13:29.294 "num_base_bdevs": 4, 00:13:29.294 "num_base_bdevs_discovered": 3, 00:13:29.294 "num_base_bdevs_operational": 3, 00:13:29.294 "process": { 00:13:29.294 "type": "rebuild", 00:13:29.294 "target": "spare", 00:13:29.294 "progress": { 00:13:29.294 "blocks": 32768, 00:13:29.294 "percent": 50 00:13:29.294 } 00:13:29.294 }, 00:13:29.294 "base_bdevs_list": [ 00:13:29.294 { 00:13:29.294 "name": "spare", 00:13:29.294 "uuid": "c4c44b6e-f5e3-5a71-b9f3-1b57832332b0", 00:13:29.294 "is_configured": true, 00:13:29.294 "data_offset": 0, 00:13:29.294 "data_size": 65536 00:13:29.294 }, 00:13:29.294 { 00:13:29.294 "name": null, 00:13:29.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.294 "is_configured": false, 00:13:29.294 "data_offset": 0, 00:13:29.294 "data_size": 65536 00:13:29.294 }, 00:13:29.294 { 00:13:29.294 "name": "BaseBdev3", 00:13:29.294 "uuid": 
"350535eb-a909-5a4c-a964-4f21444b97b1", 00:13:29.294 "is_configured": true, 00:13:29.294 "data_offset": 0, 00:13:29.294 "data_size": 65536 00:13:29.294 }, 00:13:29.294 { 00:13:29.294 "name": "BaseBdev4", 00:13:29.294 "uuid": "85614a80-fccd-5074-b8b7-03257138bcfa", 00:13:29.294 "is_configured": true, 00:13:29.294 "data_offset": 0, 00:13:29.294 "data_size": 65536 00:13:29.294 } 00:13:29.294 ] 00:13:29.294 }' 00:13:29.294 04:11:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:29.294 04:11:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:29.294 04:11:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:29.294 04:11:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:29.294 04:11:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:29.865 [2024-11-21 04:11:29.739052] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:13:30.154 [2024-11-21 04:11:29.967971] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:13:30.425 102.33 IOPS, 307.00 MiB/s [2024-11-21T04:11:30.398Z] 04:11:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:30.425 04:11:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:30.425 04:11:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:30.425 04:11:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:30.425 04:11:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:30.425 04:11:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:13:30.425 04:11:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.425 04:11:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.425 04:11:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:30.425 04:11:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:30.425 04:11:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.425 04:11:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:30.425 "name": "raid_bdev1", 00:13:30.425 "uuid": "cf19f548-17cf-46d7-b815-1dc16b172ed8", 00:13:30.425 "strip_size_kb": 0, 00:13:30.425 "state": "online", 00:13:30.425 "raid_level": "raid1", 00:13:30.425 "superblock": false, 00:13:30.425 "num_base_bdevs": 4, 00:13:30.425 "num_base_bdevs_discovered": 3, 00:13:30.425 "num_base_bdevs_operational": 3, 00:13:30.425 "process": { 00:13:30.425 "type": "rebuild", 00:13:30.425 "target": "spare", 00:13:30.425 "progress": { 00:13:30.425 "blocks": 53248, 00:13:30.425 "percent": 81 00:13:30.425 } 00:13:30.425 }, 00:13:30.425 "base_bdevs_list": [ 00:13:30.426 { 00:13:30.426 "name": "spare", 00:13:30.426 "uuid": "c4c44b6e-f5e3-5a71-b9f3-1b57832332b0", 00:13:30.426 "is_configured": true, 00:13:30.426 "data_offset": 0, 00:13:30.426 "data_size": 65536 00:13:30.426 }, 00:13:30.426 { 00:13:30.426 "name": null, 00:13:30.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.426 "is_configured": false, 00:13:30.426 "data_offset": 0, 00:13:30.426 "data_size": 65536 00:13:30.426 }, 00:13:30.426 { 00:13:30.426 "name": "BaseBdev3", 00:13:30.426 "uuid": "350535eb-a909-5a4c-a964-4f21444b97b1", 00:13:30.426 "is_configured": true, 00:13:30.426 "data_offset": 0, 00:13:30.426 "data_size": 65536 00:13:30.426 }, 00:13:30.426 { 00:13:30.426 "name": "BaseBdev4", 00:13:30.426 "uuid": 
"85614a80-fccd-5074-b8b7-03257138bcfa", 00:13:30.426 "is_configured": true, 00:13:30.426 "data_offset": 0, 00:13:30.426 "data_size": 65536 00:13:30.426 } 00:13:30.426 ] 00:13:30.426 }' 00:13:30.426 04:11:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:30.426 04:11:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:30.426 04:11:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:30.426 04:11:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:30.426 04:11:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:30.996 [2024-11-21 04:11:30.859409] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:30.996 [2024-11-21 04:11:30.964597] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:31.255 [2024-11-21 04:11:30.970705] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:31.516 94.14 IOPS, 282.43 MiB/s [2024-11-21T04:11:31.489Z] 04:11:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:31.516 04:11:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:31.516 04:11:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:31.516 04:11:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:31.516 04:11:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:31.516 04:11:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:31.516 04:11:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.516 04:11:31 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.516 04:11:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.516 04:11:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.516 04:11:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.516 04:11:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:31.516 "name": "raid_bdev1", 00:13:31.516 "uuid": "cf19f548-17cf-46d7-b815-1dc16b172ed8", 00:13:31.516 "strip_size_kb": 0, 00:13:31.516 "state": "online", 00:13:31.516 "raid_level": "raid1", 00:13:31.516 "superblock": false, 00:13:31.516 "num_base_bdevs": 4, 00:13:31.516 "num_base_bdevs_discovered": 3, 00:13:31.516 "num_base_bdevs_operational": 3, 00:13:31.516 "base_bdevs_list": [ 00:13:31.516 { 00:13:31.516 "name": "spare", 00:13:31.516 "uuid": "c4c44b6e-f5e3-5a71-b9f3-1b57832332b0", 00:13:31.516 "is_configured": true, 00:13:31.516 "data_offset": 0, 00:13:31.516 "data_size": 65536 00:13:31.516 }, 00:13:31.516 { 00:13:31.516 "name": null, 00:13:31.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.516 "is_configured": false, 00:13:31.516 "data_offset": 0, 00:13:31.516 "data_size": 65536 00:13:31.516 }, 00:13:31.516 { 00:13:31.516 "name": "BaseBdev3", 00:13:31.516 "uuid": "350535eb-a909-5a4c-a964-4f21444b97b1", 00:13:31.516 "is_configured": true, 00:13:31.516 "data_offset": 0, 00:13:31.516 "data_size": 65536 00:13:31.516 }, 00:13:31.516 { 00:13:31.516 "name": "BaseBdev4", 00:13:31.516 "uuid": "85614a80-fccd-5074-b8b7-03257138bcfa", 00:13:31.516 "is_configured": true, 00:13:31.516 "data_offset": 0, 00:13:31.516 "data_size": 65536 00:13:31.516 } 00:13:31.516 ] 00:13:31.516 }' 00:13:31.516 04:11:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:31.516 04:11:31 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:31.516 04:11:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:31.776 04:11:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:31.776 04:11:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:13:31.776 04:11:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:31.776 04:11:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:31.776 04:11:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:31.776 04:11:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:31.776 04:11:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:31.776 04:11:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.776 04:11:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.776 04:11:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.776 04:11:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.776 04:11:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.776 04:11:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:31.776 "name": "raid_bdev1", 00:13:31.776 "uuid": "cf19f548-17cf-46d7-b815-1dc16b172ed8", 00:13:31.776 "strip_size_kb": 0, 00:13:31.776 "state": "online", 00:13:31.776 "raid_level": "raid1", 00:13:31.776 "superblock": false, 00:13:31.776 "num_base_bdevs": 4, 00:13:31.776 "num_base_bdevs_discovered": 3, 00:13:31.776 "num_base_bdevs_operational": 3, 00:13:31.776 "base_bdevs_list": [ 00:13:31.776 { 00:13:31.776 
"name": "spare", 00:13:31.776 "uuid": "c4c44b6e-f5e3-5a71-b9f3-1b57832332b0", 00:13:31.776 "is_configured": true, 00:13:31.776 "data_offset": 0, 00:13:31.776 "data_size": 65536 00:13:31.776 }, 00:13:31.776 { 00:13:31.776 "name": null, 00:13:31.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.776 "is_configured": false, 00:13:31.776 "data_offset": 0, 00:13:31.776 "data_size": 65536 00:13:31.776 }, 00:13:31.776 { 00:13:31.776 "name": "BaseBdev3", 00:13:31.776 "uuid": "350535eb-a909-5a4c-a964-4f21444b97b1", 00:13:31.776 "is_configured": true, 00:13:31.776 "data_offset": 0, 00:13:31.776 "data_size": 65536 00:13:31.776 }, 00:13:31.776 { 00:13:31.776 "name": "BaseBdev4", 00:13:31.776 "uuid": "85614a80-fccd-5074-b8b7-03257138bcfa", 00:13:31.776 "is_configured": true, 00:13:31.776 "data_offset": 0, 00:13:31.776 "data_size": 65536 00:13:31.776 } 00:13:31.776 ] 00:13:31.776 }' 00:13:31.776 04:11:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:31.776 04:11:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:31.776 04:11:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:31.776 04:11:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:31.776 04:11:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:31.776 04:11:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:31.776 04:11:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:31.776 04:11:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:31.776 04:11:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:31.776 04:11:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:13:31.776 04:11:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:31.776 04:11:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:31.776 04:11:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:31.776 04:11:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:31.776 04:11:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.776 04:11:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.776 04:11:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.776 04:11:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.776 04:11:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.776 04:11:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:31.776 "name": "raid_bdev1", 00:13:31.776 "uuid": "cf19f548-17cf-46d7-b815-1dc16b172ed8", 00:13:31.776 "strip_size_kb": 0, 00:13:31.776 "state": "online", 00:13:31.776 "raid_level": "raid1", 00:13:31.776 "superblock": false, 00:13:31.776 "num_base_bdevs": 4, 00:13:31.776 "num_base_bdevs_discovered": 3, 00:13:31.776 "num_base_bdevs_operational": 3, 00:13:31.776 "base_bdevs_list": [ 00:13:31.776 { 00:13:31.776 "name": "spare", 00:13:31.776 "uuid": "c4c44b6e-f5e3-5a71-b9f3-1b57832332b0", 00:13:31.776 "is_configured": true, 00:13:31.776 "data_offset": 0, 00:13:31.776 "data_size": 65536 00:13:31.776 }, 00:13:31.776 { 00:13:31.776 "name": null, 00:13:31.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.776 "is_configured": false, 00:13:31.776 "data_offset": 0, 00:13:31.776 "data_size": 65536 00:13:31.776 }, 00:13:31.776 { 00:13:31.776 "name": "BaseBdev3", 00:13:31.776 "uuid": 
"350535eb-a909-5a4c-a964-4f21444b97b1", 00:13:31.776 "is_configured": true, 00:13:31.776 "data_offset": 0, 00:13:31.776 "data_size": 65536 00:13:31.776 }, 00:13:31.776 { 00:13:31.776 "name": "BaseBdev4", 00:13:31.776 "uuid": "85614a80-fccd-5074-b8b7-03257138bcfa", 00:13:31.776 "is_configured": true, 00:13:31.776 "data_offset": 0, 00:13:31.776 "data_size": 65536 00:13:31.776 } 00:13:31.776 ] 00:13:31.776 }' 00:13:31.776 04:11:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:31.776 04:11:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:32.346 04:11:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:32.346 04:11:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.346 04:11:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:32.346 [2024-11-21 04:11:32.031901] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:32.346 [2024-11-21 04:11:32.031943] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:32.346 86.38 IOPS, 259.12 MiB/s 00:13:32.346 Latency(us) 00:13:32.346 [2024-11-21T04:11:32.319Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:32.346 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:32.346 raid_bdev1 : 8.05 86.09 258.27 0.00 0.00 15945.19 279.03 113099.68 00:13:32.346 [2024-11-21T04:11:32.319Z] =================================================================================================================== 00:13:32.346 [2024-11-21T04:11:32.319Z] Total : 86.09 258.27 0.00 0.00 15945.19 279.03 113099.68 00:13:32.346 [2024-11-21 04:11:32.075718] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:32.346 [2024-11-21 04:11:32.075767] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:13:32.346 [2024-11-21 04:11:32.075869] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:32.346 [2024-11-21 04:11:32.075884] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:13:32.346 { 00:13:32.346 "results": [ 00:13:32.346 { 00:13:32.346 "job": "raid_bdev1", 00:13:32.346 "core_mask": "0x1", 00:13:32.346 "workload": "randrw", 00:13:32.346 "percentage": 50, 00:13:32.346 "status": "finished", 00:13:32.346 "queue_depth": 2, 00:13:32.346 "io_size": 3145728, 00:13:32.346 "runtime": 8.049758, 00:13:32.346 "iops": 86.08954455525247, 00:13:32.346 "mibps": 258.2686336657574, 00:13:32.346 "io_failed": 0, 00:13:32.346 "io_timeout": 0, 00:13:32.346 "avg_latency_us": 15945.188024978419, 00:13:32.346 "min_latency_us": 279.0288209606987, 00:13:32.346 "max_latency_us": 113099.68209606987 00:13:32.346 } 00:13:32.346 ], 00:13:32.346 "core_count": 1 00:13:32.346 } 00:13:32.346 04:11:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.346 04:11:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:32.346 04:11:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.346 04:11:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.346 04:11:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:32.346 04:11:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.346 04:11:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:32.346 04:11:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:32.346 04:11:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:32.346 04:11:32 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:32.346 04:11:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:32.346 04:11:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:32.346 04:11:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:32.346 04:11:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:32.346 04:11:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:32.346 04:11:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:32.346 04:11:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:32.346 04:11:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:32.346 04:11:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:32.346 /dev/nbd0 00:13:32.346 04:11:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:32.606 04:11:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:32.606 04:11:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:32.606 04:11:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:13:32.606 04:11:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:32.606 04:11:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:32.606 04:11:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:32.606 04:11:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:13:32.606 04:11:32 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:32.606 04:11:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:32.606 04:11:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:32.606 1+0 records in 00:13:32.606 1+0 records out 00:13:32.606 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000239734 s, 17.1 MB/s 00:13:32.606 04:11:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:32.606 04:11:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:13:32.606 04:11:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:32.606 04:11:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:32.606 04:11:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:13:32.606 04:11:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:32.606 04:11:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:32.606 04:11:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:32.606 04:11:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:13:32.606 04:11:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:13:32.606 04:11:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:32.606 04:11:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:13:32.606 04:11:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:13:32.606 04:11:32 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:32.606 04:11:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:13:32.606 04:11:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:32.606 04:11:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:32.606 04:11:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:32.606 04:11:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:32.606 04:11:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:32.606 04:11:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:32.606 04:11:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:13:32.606 /dev/nbd1 00:13:32.606 04:11:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:32.606 04:11:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:32.606 04:11:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:32.606 04:11:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:13:32.606 04:11:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:32.606 04:11:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:32.606 04:11:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:32.865 04:11:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:13:32.865 04:11:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:32.865 04:11:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i 
<= 20 )) 00:13:32.865 04:11:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:32.865 1+0 records in 00:13:32.865 1+0 records out 00:13:32.865 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000361723 s, 11.3 MB/s 00:13:32.865 04:11:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:32.865 04:11:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:13:32.865 04:11:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:32.865 04:11:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:32.865 04:11:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:13:32.865 04:11:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:32.865 04:11:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:32.865 04:11:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:32.865 04:11:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:32.865 04:11:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:32.865 04:11:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:32.865 04:11:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:32.865 04:11:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:32.865 04:11:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:32.865 04:11:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:33.124 04:11:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:33.124 04:11:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:33.124 04:11:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:33.124 04:11:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:33.124 04:11:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:33.124 04:11:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:33.124 04:11:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:33.124 04:11:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:33.124 04:11:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:33.124 04:11:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:13:33.125 04:11:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:13:33.125 04:11:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:33.125 04:11:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:13:33.125 04:11:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:33.125 04:11:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:33.125 04:11:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:33.125 04:11:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:33.125 04:11:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:33.125 04:11:32 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:33.125 04:11:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:13:33.383 /dev/nbd1 00:13:33.383 04:11:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:33.383 04:11:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:33.383 04:11:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:33.383 04:11:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:13:33.383 04:11:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:33.383 04:11:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:33.383 04:11:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:33.383 04:11:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:13:33.383 04:11:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:33.383 04:11:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:33.383 04:11:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:33.383 1+0 records in 00:13:33.383 1+0 records out 00:13:33.383 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000362252 s, 11.3 MB/s 00:13:33.383 04:11:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:33.384 04:11:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:13:33.384 04:11:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:33.384 04:11:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:33.384 04:11:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:13:33.384 04:11:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:33.384 04:11:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:33.384 04:11:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:33.384 04:11:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:33.384 04:11:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:33.384 04:11:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:33.384 04:11:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:33.384 04:11:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:33.384 04:11:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:33.384 04:11:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:33.644 04:11:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:33.644 04:11:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:33.644 04:11:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:33.644 04:11:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:33.644 04:11:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:33.644 04:11:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 
00:13:33.644 04:11:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:33.644 04:11:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:33.644 04:11:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:33.644 04:11:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:33.644 04:11:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:33.644 04:11:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:33.644 04:11:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:33.644 04:11:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:33.644 04:11:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:33.905 04:11:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:33.905 04:11:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:33.905 04:11:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:33.905 04:11:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:33.905 04:11:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:33.905 04:11:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:33.905 04:11:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:33.905 04:11:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:33.905 04:11:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:33.905 04:11:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # 
killprocess 89401 00:13:33.905 04:11:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 89401 ']' 00:13:33.905 04:11:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 89401 00:13:33.905 04:11:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:13:33.905 04:11:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:33.905 04:11:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89401 00:13:33.905 04:11:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:33.905 04:11:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:33.905 killing process with pid 89401 00:13:33.905 04:11:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89401' 00:13:33.905 04:11:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 89401 00:13:33.905 Received shutdown signal, test time was about 9.661029 seconds 00:13:33.905 00:13:33.905 Latency(us) 00:13:33.905 [2024-11-21T04:11:33.878Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:33.905 [2024-11-21T04:11:33.878Z] =================================================================================================================== 00:13:33.905 [2024-11-21T04:11:33.878Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:33.905 [2024-11-21 04:11:33.679921] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:33.905 04:11:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 89401 00:13:33.905 [2024-11-21 04:11:33.767035] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:34.167 04:11:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:34.167 00:13:34.167 real 0m11.777s 00:13:34.167 user 
0m15.002s 00:13:34.167 sys 0m1.896s 00:13:34.167 04:11:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:34.167 04:11:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:34.167 ************************************ 00:13:34.167 END TEST raid_rebuild_test_io 00:13:34.167 ************************************ 00:13:34.427 04:11:34 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:13:34.427 04:11:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:34.427 04:11:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:34.427 04:11:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:34.427 ************************************ 00:13:34.427 START TEST raid_rebuild_test_sb_io 00:13:34.427 ************************************ 00:13:34.427 04:11:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true true true 00:13:34.427 04:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:34.427 04:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:34.427 04:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:34.427 04:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:34.427 04:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:34.427 04:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:34.427 04:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:34.427 04:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:34.427 04:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 
00:13:34.427 04:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:34.427 04:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:34.427 04:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:34.427 04:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:34.427 04:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:34.427 04:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:34.427 04:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:34.427 04:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:34.427 04:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:34.427 04:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:34.428 04:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:34.428 04:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:34.428 04:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:34.428 04:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:34.428 04:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:34.428 04:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:34.428 04:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:34.428 04:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:34.428 04:11:34 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:34.428 04:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:34.428 04:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:34.428 04:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=89794 00:13:34.428 04:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 89794 00:13:34.428 04:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:34.428 04:11:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 89794 ']' 00:13:34.428 04:11:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:34.428 04:11:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:34.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:34.428 04:11:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:34.428 04:11:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:34.428 04:11:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:34.428 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:34.428 Zero copy mechanism will not be used. 00:13:34.428 [2024-11-21 04:11:34.266616] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:13:34.428 [2024-11-21 04:11:34.266766] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89794 ] 00:13:34.688 [2024-11-21 04:11:34.420980] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:34.688 [2024-11-21 04:11:34.461137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:34.688 [2024-11-21 04:11:34.537551] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:34.688 [2024-11-21 04:11:34.537592] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:35.258 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:35.258 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:13:35.258 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:35.258 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:35.258 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.258 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.258 BaseBdev1_malloc 00:13:35.258 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.258 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:35.258 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.258 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.258 [2024-11-21 04:11:35.100640] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:35.258 [2024-11-21 04:11:35.100717] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:35.258 [2024-11-21 04:11:35.100749] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:13:35.258 [2024-11-21 04:11:35.100762] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:35.258 [2024-11-21 04:11:35.103259] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:35.258 [2024-11-21 04:11:35.103295] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:35.258 BaseBdev1 00:13:35.258 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.258 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:35.258 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:35.258 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.258 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.258 BaseBdev2_malloc 00:13:35.258 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.258 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:35.258 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.258 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.258 [2024-11-21 04:11:35.135519] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:35.258 [2024-11-21 04:11:35.135578] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:13:35.258 [2024-11-21 04:11:35.135601] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:35.258 [2024-11-21 04:11:35.135610] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:35.258 [2024-11-21 04:11:35.138054] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:35.258 [2024-11-21 04:11:35.138097] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:35.258 BaseBdev2 00:13:35.258 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.258 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:35.258 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:35.258 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.258 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.258 BaseBdev3_malloc 00:13:35.258 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.258 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:35.258 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.258 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.258 [2024-11-21 04:11:35.170332] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:35.259 [2024-11-21 04:11:35.170411] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:35.259 [2024-11-21 04:11:35.170437] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:35.259 
[2024-11-21 04:11:35.170446] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:35.259 [2024-11-21 04:11:35.172881] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:35.259 [2024-11-21 04:11:35.172917] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:35.259 BaseBdev3 00:13:35.259 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.259 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:35.259 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:35.259 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.259 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.259 BaseBdev4_malloc 00:13:35.259 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.259 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:35.259 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.259 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.259 [2024-11-21 04:11:35.216198] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:35.259 [2024-11-21 04:11:35.216266] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:35.259 [2024-11-21 04:11:35.216290] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:35.259 [2024-11-21 04:11:35.216299] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:35.259 [2024-11-21 04:11:35.218700] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:35.259 [2024-11-21 04:11:35.218732] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:35.259 BaseBdev4 00:13:35.259 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.259 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:35.259 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.259 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.519 spare_malloc 00:13:35.519 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.519 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:35.519 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.519 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.519 spare_delay 00:13:35.519 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.520 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:35.520 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.520 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.520 [2024-11-21 04:11:35.262970] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:35.520 [2024-11-21 04:11:35.263028] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:35.520 [2024-11-21 04:11:35.263048] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x616000009c80 00:13:35.520 [2024-11-21 04:11:35.263057] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:35.520 [2024-11-21 04:11:35.265515] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:35.520 [2024-11-21 04:11:35.265551] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:35.520 spare 00:13:35.520 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.520 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:35.520 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.520 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.520 [2024-11-21 04:11:35.275045] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:35.520 [2024-11-21 04:11:35.277213] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:35.520 [2024-11-21 04:11:35.277291] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:35.520 [2024-11-21 04:11:35.277338] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:35.520 [2024-11-21 04:11:35.277507] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:13:35.520 [2024-11-21 04:11:35.277520] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:35.520 [2024-11-21 04:11:35.277797] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:13:35.520 [2024-11-21 04:11:35.277965] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:13:35.520 [2024-11-21 04:11:35.277992] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:13:35.520 [2024-11-21 04:11:35.278115] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:35.520 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.520 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:35.520 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:35.520 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:35.520 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:35.520 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:35.520 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:35.520 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.520 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.520 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:35.520 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.520 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.520 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.520 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.520 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.520 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.520 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.520 "name": "raid_bdev1", 00:13:35.520 "uuid": "1b015646-7686-4770-b33e-9a6a1bf5c475", 00:13:35.520 "strip_size_kb": 0, 00:13:35.520 "state": "online", 00:13:35.520 "raid_level": "raid1", 00:13:35.520 "superblock": true, 00:13:35.520 "num_base_bdevs": 4, 00:13:35.520 "num_base_bdevs_discovered": 4, 00:13:35.520 "num_base_bdevs_operational": 4, 00:13:35.520 "base_bdevs_list": [ 00:13:35.520 { 00:13:35.520 "name": "BaseBdev1", 00:13:35.520 "uuid": "e5c29e17-69b9-55d7-a7fa-f6401b80e9de", 00:13:35.520 "is_configured": true, 00:13:35.520 "data_offset": 2048, 00:13:35.520 "data_size": 63488 00:13:35.520 }, 00:13:35.520 { 00:13:35.520 "name": "BaseBdev2", 00:13:35.520 "uuid": "5e0326f5-3377-5e48-8ace-e86d9a2974c1", 00:13:35.520 "is_configured": true, 00:13:35.520 "data_offset": 2048, 00:13:35.520 "data_size": 63488 00:13:35.520 }, 00:13:35.520 { 00:13:35.520 "name": "BaseBdev3", 00:13:35.520 "uuid": "d6f6c919-5810-54c5-8aab-9278c3cead9d", 00:13:35.520 "is_configured": true, 00:13:35.520 "data_offset": 2048, 00:13:35.520 "data_size": 63488 00:13:35.520 }, 00:13:35.520 { 00:13:35.520 "name": "BaseBdev4", 00:13:35.520 "uuid": "79d4466a-fd4a-5f36-a745-109ae25edf53", 00:13:35.520 "is_configured": true, 00:13:35.520 "data_offset": 2048, 00:13:35.520 "data_size": 63488 00:13:35.520 } 00:13:35.520 ] 00:13:35.520 }' 00:13:35.520 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.520 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.780 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:35.780 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:35.780 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.780 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.780 [2024-11-21 04:11:35.730605] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:36.041 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.041 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:36.041 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.041 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:36.041 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.041 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:36.041 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.041 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:36.041 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:36.041 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:36.041 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:36.041 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.041 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:36.041 [2024-11-21 04:11:35.826016] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:36.041 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.041 04:11:35 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:36.041 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:36.041 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:36.041 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:36.041 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:36.041 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:36.041 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:36.041 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:36.041 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:36.041 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:36.041 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.041 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.041 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.041 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:36.041 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.041 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:36.041 "name": "raid_bdev1", 00:13:36.041 "uuid": "1b015646-7686-4770-b33e-9a6a1bf5c475", 00:13:36.041 "strip_size_kb": 0, 00:13:36.041 "state": "online", 00:13:36.041 "raid_level": "raid1", 00:13:36.041 
"superblock": true, 00:13:36.041 "num_base_bdevs": 4, 00:13:36.041 "num_base_bdevs_discovered": 3, 00:13:36.041 "num_base_bdevs_operational": 3, 00:13:36.041 "base_bdevs_list": [ 00:13:36.041 { 00:13:36.041 "name": null, 00:13:36.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.042 "is_configured": false, 00:13:36.042 "data_offset": 0, 00:13:36.042 "data_size": 63488 00:13:36.042 }, 00:13:36.042 { 00:13:36.042 "name": "BaseBdev2", 00:13:36.042 "uuid": "5e0326f5-3377-5e48-8ace-e86d9a2974c1", 00:13:36.042 "is_configured": true, 00:13:36.042 "data_offset": 2048, 00:13:36.042 "data_size": 63488 00:13:36.042 }, 00:13:36.042 { 00:13:36.042 "name": "BaseBdev3", 00:13:36.042 "uuid": "d6f6c919-5810-54c5-8aab-9278c3cead9d", 00:13:36.042 "is_configured": true, 00:13:36.042 "data_offset": 2048, 00:13:36.042 "data_size": 63488 00:13:36.042 }, 00:13:36.042 { 00:13:36.042 "name": "BaseBdev4", 00:13:36.042 "uuid": "79d4466a-fd4a-5f36-a745-109ae25edf53", 00:13:36.042 "is_configured": true, 00:13:36.042 "data_offset": 2048, 00:13:36.042 "data_size": 63488 00:13:36.042 } 00:13:36.042 ] 00:13:36.042 }' 00:13:36.042 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:36.042 04:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:36.042 [2024-11-21 04:11:35.921296] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:13:36.042 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:36.042 Zero copy mechanism will not be used. 00:13:36.042 Running I/O for 60 seconds... 
00:13:36.611 04:11:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:36.611 04:11:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.611 04:11:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:36.611 [2024-11-21 04:11:36.312133] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:36.611 04:11:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.611 04:11:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:36.611 [2024-11-21 04:11:36.386191] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002940 00:13:36.611 [2024-11-21 04:11:36.388637] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:36.611 [2024-11-21 04:11:36.510862] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:36.611 [2024-11-21 04:11:36.513055] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:36.870 [2024-11-21 04:11:36.721749] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:36.870 [2024-11-21 04:11:36.722143] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:37.128 142.00 IOPS, 426.00 MiB/s [2024-11-21T04:11:37.101Z] [2024-11-21 04:11:37.061706] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:37.128 [2024-11-21 04:11:37.062629] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:37.387 [2024-11-21 04:11:37.182747] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:37.387 [2024-11-21 04:11:37.183909] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:37.646 04:11:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:37.646 04:11:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:37.646 04:11:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:37.646 04:11:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:37.646 04:11:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:37.646 04:11:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.646 04:11:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.646 04:11:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.646 04:11:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.646 04:11:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.646 04:11:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:37.646 "name": "raid_bdev1", 00:13:37.646 "uuid": "1b015646-7686-4770-b33e-9a6a1bf5c475", 00:13:37.646 "strip_size_kb": 0, 00:13:37.646 "state": "online", 00:13:37.646 "raid_level": "raid1", 00:13:37.646 "superblock": true, 00:13:37.646 "num_base_bdevs": 4, 00:13:37.646 "num_base_bdevs_discovered": 4, 00:13:37.646 "num_base_bdevs_operational": 4, 00:13:37.646 "process": { 00:13:37.646 "type": "rebuild", 00:13:37.646 "target": "spare", 00:13:37.646 "progress": { 
00:13:37.646 "blocks": 10240, 00:13:37.646 "percent": 16 00:13:37.646 } 00:13:37.646 }, 00:13:37.646 "base_bdevs_list": [ 00:13:37.646 { 00:13:37.646 "name": "spare", 00:13:37.646 "uuid": "eff98b02-a460-5034-9992-b4af1d65bf25", 00:13:37.646 "is_configured": true, 00:13:37.646 "data_offset": 2048, 00:13:37.646 "data_size": 63488 00:13:37.646 }, 00:13:37.646 { 00:13:37.646 "name": "BaseBdev2", 00:13:37.646 "uuid": "5e0326f5-3377-5e48-8ace-e86d9a2974c1", 00:13:37.646 "is_configured": true, 00:13:37.646 "data_offset": 2048, 00:13:37.646 "data_size": 63488 00:13:37.646 }, 00:13:37.646 { 00:13:37.646 "name": "BaseBdev3", 00:13:37.646 "uuid": "d6f6c919-5810-54c5-8aab-9278c3cead9d", 00:13:37.646 "is_configured": true, 00:13:37.646 "data_offset": 2048, 00:13:37.646 "data_size": 63488 00:13:37.646 }, 00:13:37.646 { 00:13:37.646 "name": "BaseBdev4", 00:13:37.646 "uuid": "79d4466a-fd4a-5f36-a745-109ae25edf53", 00:13:37.646 "is_configured": true, 00:13:37.646 "data_offset": 2048, 00:13:37.646 "data_size": 63488 00:13:37.646 } 00:13:37.646 ] 00:13:37.646 }' 00:13:37.646 04:11:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:37.646 04:11:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:37.646 04:11:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:37.646 04:11:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:37.646 04:11:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:37.646 04:11:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.646 04:11:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.646 [2024-11-21 04:11:37.530649] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:37.646 [2024-11-21 
04:11:37.530793] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:37.905 [2024-11-21 04:11:37.636096] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:37.906 [2024-11-21 04:11:37.655537] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:37.906 [2024-11-21 04:11:37.655601] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:37.906 [2024-11-21 04:11:37.655621] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:37.906 [2024-11-21 04:11:37.671447] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002870 00:13:37.906 04:11:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.906 04:11:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:37.906 04:11:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:37.906 04:11:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:37.906 04:11:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:37.906 04:11:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:37.906 04:11:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:37.906 04:11:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:37.906 04:11:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:37.906 04:11:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:37.906 04:11:37 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@111 -- # local tmp 00:13:37.906 04:11:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.906 04:11:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.906 04:11:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.906 04:11:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.906 04:11:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.906 04:11:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:37.906 "name": "raid_bdev1", 00:13:37.906 "uuid": "1b015646-7686-4770-b33e-9a6a1bf5c475", 00:13:37.906 "strip_size_kb": 0, 00:13:37.906 "state": "online", 00:13:37.906 "raid_level": "raid1", 00:13:37.906 "superblock": true, 00:13:37.906 "num_base_bdevs": 4, 00:13:37.906 "num_base_bdevs_discovered": 3, 00:13:37.906 "num_base_bdevs_operational": 3, 00:13:37.906 "base_bdevs_list": [ 00:13:37.906 { 00:13:37.906 "name": null, 00:13:37.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.906 "is_configured": false, 00:13:37.906 "data_offset": 0, 00:13:37.906 "data_size": 63488 00:13:37.906 }, 00:13:37.906 { 00:13:37.906 "name": "BaseBdev2", 00:13:37.906 "uuid": "5e0326f5-3377-5e48-8ace-e86d9a2974c1", 00:13:37.906 "is_configured": true, 00:13:37.906 "data_offset": 2048, 00:13:37.906 "data_size": 63488 00:13:37.906 }, 00:13:37.906 { 00:13:37.906 "name": "BaseBdev3", 00:13:37.906 "uuid": "d6f6c919-5810-54c5-8aab-9278c3cead9d", 00:13:37.906 "is_configured": true, 00:13:37.906 "data_offset": 2048, 00:13:37.906 "data_size": 63488 00:13:37.906 }, 00:13:37.906 { 00:13:37.906 "name": "BaseBdev4", 00:13:37.906 "uuid": "79d4466a-fd4a-5f36-a745-109ae25edf53", 00:13:37.906 "is_configured": true, 00:13:37.906 "data_offset": 2048, 00:13:37.906 "data_size": 63488 00:13:37.906 } 
00:13:37.906 ] 00:13:37.906 }' 00:13:37.906 04:11:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:37.906 04:11:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:38.425 139.50 IOPS, 418.50 MiB/s [2024-11-21T04:11:38.398Z] 04:11:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:38.425 04:11:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:38.425 04:11:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:38.425 04:11:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:38.425 04:11:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:38.425 04:11:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.425 04:11:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.425 04:11:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.425 04:11:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:38.425 04:11:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.425 04:11:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:38.425 "name": "raid_bdev1", 00:13:38.425 "uuid": "1b015646-7686-4770-b33e-9a6a1bf5c475", 00:13:38.425 "strip_size_kb": 0, 00:13:38.425 "state": "online", 00:13:38.425 "raid_level": "raid1", 00:13:38.425 "superblock": true, 00:13:38.425 "num_base_bdevs": 4, 00:13:38.425 "num_base_bdevs_discovered": 3, 00:13:38.425 "num_base_bdevs_operational": 3, 00:13:38.425 "base_bdevs_list": [ 00:13:38.425 { 00:13:38.425 "name": null, 00:13:38.425 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:38.425 "is_configured": false, 00:13:38.425 "data_offset": 0, 00:13:38.425 "data_size": 63488 00:13:38.425 }, 00:13:38.425 { 00:13:38.425 "name": "BaseBdev2", 00:13:38.425 "uuid": "5e0326f5-3377-5e48-8ace-e86d9a2974c1", 00:13:38.425 "is_configured": true, 00:13:38.425 "data_offset": 2048, 00:13:38.426 "data_size": 63488 00:13:38.426 }, 00:13:38.426 { 00:13:38.426 "name": "BaseBdev3", 00:13:38.426 "uuid": "d6f6c919-5810-54c5-8aab-9278c3cead9d", 00:13:38.426 "is_configured": true, 00:13:38.426 "data_offset": 2048, 00:13:38.426 "data_size": 63488 00:13:38.426 }, 00:13:38.426 { 00:13:38.426 "name": "BaseBdev4", 00:13:38.426 "uuid": "79d4466a-fd4a-5f36-a745-109ae25edf53", 00:13:38.426 "is_configured": true, 00:13:38.426 "data_offset": 2048, 00:13:38.426 "data_size": 63488 00:13:38.426 } 00:13:38.426 ] 00:13:38.426 }' 00:13:38.426 04:11:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:38.426 04:11:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:38.426 04:11:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:38.426 04:11:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:38.426 04:11:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:38.426 04:11:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.426 04:11:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:38.426 [2024-11-21 04:11:38.325131] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:38.426 04:11:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.426 04:11:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 
00:13:38.426 [2024-11-21 04:11:38.392863] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:13:38.426 [2024-11-21 04:11:38.395338] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:38.684 [2024-11-21 04:11:38.518820] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:38.684 [2024-11-21 04:11:38.519667] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:38.944 [2024-11-21 04:11:38.741277] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:38.944 [2024-11-21 04:11:38.742490] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:39.204 146.67 IOPS, 440.00 MiB/s [2024-11-21T04:11:39.177Z] [2024-11-21 04:11:39.111838] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:39.204 [2024-11-21 04:11:39.114033] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:39.464 [2024-11-21 04:11:39.353240] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:39.464 [2024-11-21 04:11:39.354433] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:39.464 04:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:39.464 04:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:39.464 04:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:39.464 04:11:39 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:39.464 04:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:39.464 04:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.464 04:11:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.464 04:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.464 04:11:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.464 04:11:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.464 04:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:39.464 "name": "raid_bdev1", 00:13:39.464 "uuid": "1b015646-7686-4770-b33e-9a6a1bf5c475", 00:13:39.464 "strip_size_kb": 0, 00:13:39.464 "state": "online", 00:13:39.464 "raid_level": "raid1", 00:13:39.464 "superblock": true, 00:13:39.464 "num_base_bdevs": 4, 00:13:39.464 "num_base_bdevs_discovered": 4, 00:13:39.464 "num_base_bdevs_operational": 4, 00:13:39.464 "process": { 00:13:39.464 "type": "rebuild", 00:13:39.464 "target": "spare", 00:13:39.464 "progress": { 00:13:39.464 "blocks": 10240, 00:13:39.464 "percent": 16 00:13:39.464 } 00:13:39.464 }, 00:13:39.464 "base_bdevs_list": [ 00:13:39.464 { 00:13:39.464 "name": "spare", 00:13:39.464 "uuid": "eff98b02-a460-5034-9992-b4af1d65bf25", 00:13:39.464 "is_configured": true, 00:13:39.464 "data_offset": 2048, 00:13:39.464 "data_size": 63488 00:13:39.464 }, 00:13:39.464 { 00:13:39.464 "name": "BaseBdev2", 00:13:39.464 "uuid": "5e0326f5-3377-5e48-8ace-e86d9a2974c1", 00:13:39.464 "is_configured": true, 00:13:39.464 "data_offset": 2048, 00:13:39.464 "data_size": 63488 00:13:39.464 }, 00:13:39.464 { 00:13:39.464 "name": "BaseBdev3", 00:13:39.464 "uuid": 
"d6f6c919-5810-54c5-8aab-9278c3cead9d", 00:13:39.464 "is_configured": true, 00:13:39.464 "data_offset": 2048, 00:13:39.464 "data_size": 63488 00:13:39.464 }, 00:13:39.464 { 00:13:39.464 "name": "BaseBdev4", 00:13:39.464 "uuid": "79d4466a-fd4a-5f36-a745-109ae25edf53", 00:13:39.464 "is_configured": true, 00:13:39.464 "data_offset": 2048, 00:13:39.464 "data_size": 63488 00:13:39.464 } 00:13:39.464 ] 00:13:39.464 }' 00:13:39.464 04:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:39.724 04:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:39.724 04:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:39.724 04:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:39.724 04:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:39.724 04:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:39.724 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:39.724 04:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:39.724 04:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:39.724 04:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:39.724 04:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:39.724 04:11:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.724 04:11:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.724 [2024-11-21 04:11:39.518440] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:39.724 
[2024-11-21 04:11:39.668753] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000002870 00:13:39.724 [2024-11-21 04:11:39.668789] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000002a10 00:13:39.724 04:11:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.724 04:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:39.724 04:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:39.724 04:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:39.724 04:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:39.724 04:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:39.724 04:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:39.724 04:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:39.724 04:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.724 04:11:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.724 04:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.724 04:11:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.986 04:11:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.986 04:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:39.986 "name": "raid_bdev1", 00:13:39.986 "uuid": "1b015646-7686-4770-b33e-9a6a1bf5c475", 00:13:39.986 "strip_size_kb": 0, 00:13:39.986 "state": "online", 
00:13:39.986 "raid_level": "raid1", 00:13:39.986 "superblock": true, 00:13:39.986 "num_base_bdevs": 4, 00:13:39.986 "num_base_bdevs_discovered": 3, 00:13:39.986 "num_base_bdevs_operational": 3, 00:13:39.986 "process": { 00:13:39.986 "type": "rebuild", 00:13:39.986 "target": "spare", 00:13:39.986 "progress": { 00:13:39.986 "blocks": 12288, 00:13:39.986 "percent": 19 00:13:39.986 } 00:13:39.986 }, 00:13:39.986 "base_bdevs_list": [ 00:13:39.986 { 00:13:39.986 "name": "spare", 00:13:39.986 "uuid": "eff98b02-a460-5034-9992-b4af1d65bf25", 00:13:39.986 "is_configured": true, 00:13:39.986 "data_offset": 2048, 00:13:39.986 "data_size": 63488 00:13:39.986 }, 00:13:39.986 { 00:13:39.986 "name": null, 00:13:39.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.986 "is_configured": false, 00:13:39.986 "data_offset": 0, 00:13:39.986 "data_size": 63488 00:13:39.986 }, 00:13:39.986 { 00:13:39.986 "name": "BaseBdev3", 00:13:39.986 "uuid": "d6f6c919-5810-54c5-8aab-9278c3cead9d", 00:13:39.986 "is_configured": true, 00:13:39.986 "data_offset": 2048, 00:13:39.986 "data_size": 63488 00:13:39.986 }, 00:13:39.986 { 00:13:39.986 "name": "BaseBdev4", 00:13:39.986 "uuid": "79d4466a-fd4a-5f36-a745-109ae25edf53", 00:13:39.986 "is_configured": true, 00:13:39.986 "data_offset": 2048, 00:13:39.986 "data_size": 63488 00:13:39.986 } 00:13:39.986 ] 00:13:39.986 }' 00:13:39.986 04:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:39.986 04:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:39.986 04:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:39.986 04:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:39.986 04:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=415 00:13:39.986 04:11:39 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:39.986 04:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:39.986 04:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:39.986 04:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:39.986 04:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:39.986 04:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:39.986 04:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.986 04:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.986 04:11:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.986 04:11:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.986 04:11:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.986 04:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:39.986 "name": "raid_bdev1", 00:13:39.986 "uuid": "1b015646-7686-4770-b33e-9a6a1bf5c475", 00:13:39.986 "strip_size_kb": 0, 00:13:39.986 "state": "online", 00:13:39.986 "raid_level": "raid1", 00:13:39.986 "superblock": true, 00:13:39.986 "num_base_bdevs": 4, 00:13:39.986 "num_base_bdevs_discovered": 3, 00:13:39.986 "num_base_bdevs_operational": 3, 00:13:39.986 "process": { 00:13:39.986 "type": "rebuild", 00:13:39.986 "target": "spare", 00:13:39.986 "progress": { 00:13:39.986 "blocks": 12288, 00:13:39.986 "percent": 19 00:13:39.986 } 00:13:39.986 }, 00:13:39.986 "base_bdevs_list": [ 00:13:39.986 { 00:13:39.986 "name": "spare", 00:13:39.986 "uuid": "eff98b02-a460-5034-9992-b4af1d65bf25", 
00:13:39.986 "is_configured": true, 00:13:39.986 "data_offset": 2048, 00:13:39.986 "data_size": 63488 00:13:39.986 }, 00:13:39.986 { 00:13:39.986 "name": null, 00:13:39.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.986 "is_configured": false, 00:13:39.986 "data_offset": 0, 00:13:39.986 "data_size": 63488 00:13:39.986 }, 00:13:39.986 { 00:13:39.986 "name": "BaseBdev3", 00:13:39.986 "uuid": "d6f6c919-5810-54c5-8aab-9278c3cead9d", 00:13:39.986 "is_configured": true, 00:13:39.986 "data_offset": 2048, 00:13:39.986 "data_size": 63488 00:13:39.986 }, 00:13:39.986 { 00:13:39.986 "name": "BaseBdev4", 00:13:39.986 "uuid": "79d4466a-fd4a-5f36-a745-109ae25edf53", 00:13:39.986 "is_configured": true, 00:13:39.986 "data_offset": 2048, 00:13:39.986 "data_size": 63488 00:13:39.986 } 00:13:39.986 ] 00:13:39.986 }' 00:13:39.986 04:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:39.986 04:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:39.986 04:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:40.246 127.00 IOPS, 381.00 MiB/s [2024-11-21T04:11:40.219Z] 04:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:40.246 04:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:40.246 [2024-11-21 04:11:40.181657] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:13:40.506 [2024-11-21 04:11:40.303031] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:40.766 [2024-11-21 04:11:40.627942] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:13:41.025 [2024-11-21 04:11:40.846057] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:13:41.025 113.60 IOPS, 340.80 MiB/s [2024-11-21T04:11:40.999Z] 04:11:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:41.026 04:11:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:41.026 04:11:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:41.026 04:11:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:41.026 04:11:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:41.026 04:11:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:41.026 04:11:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.026 04:11:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.026 04:11:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.026 04:11:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.308 04:11:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.308 04:11:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:41.308 "name": "raid_bdev1", 00:13:41.308 "uuid": "1b015646-7686-4770-b33e-9a6a1bf5c475", 00:13:41.308 "strip_size_kb": 0, 00:13:41.308 "state": "online", 00:13:41.308 "raid_level": "raid1", 00:13:41.308 "superblock": true, 00:13:41.308 "num_base_bdevs": 4, 00:13:41.308 "num_base_bdevs_discovered": 3, 00:13:41.308 "num_base_bdevs_operational": 3, 00:13:41.308 "process": { 00:13:41.308 "type": "rebuild", 00:13:41.308 "target": "spare", 00:13:41.308 "progress": { 
00:13:41.308 "blocks": 30720, 00:13:41.308 "percent": 48 00:13:41.308 } 00:13:41.308 }, 00:13:41.308 "base_bdevs_list": [ 00:13:41.308 { 00:13:41.308 "name": "spare", 00:13:41.308 "uuid": "eff98b02-a460-5034-9992-b4af1d65bf25", 00:13:41.308 "is_configured": true, 00:13:41.308 "data_offset": 2048, 00:13:41.308 "data_size": 63488 00:13:41.308 }, 00:13:41.308 { 00:13:41.308 "name": null, 00:13:41.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.308 "is_configured": false, 00:13:41.308 "data_offset": 0, 00:13:41.308 "data_size": 63488 00:13:41.308 }, 00:13:41.308 { 00:13:41.308 "name": "BaseBdev3", 00:13:41.308 "uuid": "d6f6c919-5810-54c5-8aab-9278c3cead9d", 00:13:41.308 "is_configured": true, 00:13:41.308 "data_offset": 2048, 00:13:41.308 "data_size": 63488 00:13:41.308 }, 00:13:41.308 { 00:13:41.308 "name": "BaseBdev4", 00:13:41.308 "uuid": "79d4466a-fd4a-5f36-a745-109ae25edf53", 00:13:41.308 "is_configured": true, 00:13:41.308 "data_offset": 2048, 00:13:41.308 "data_size": 63488 00:13:41.308 } 00:13:41.308 ] 00:13:41.308 }' 00:13:41.308 04:11:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:41.308 04:11:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:41.308 04:11:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:41.308 04:11:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:41.308 04:11:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:41.580 [2024-11-21 04:11:41.492206] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:13:41.840 [2024-11-21 04:11:41.714379] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:13:42.360 103.50 IOPS, 310.50 MiB/s 
[2024-11-21T04:11:42.333Z] 04:11:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:42.360 04:11:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:42.360 04:11:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:42.360 04:11:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:42.360 04:11:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:42.360 04:11:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:42.360 04:11:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.360 04:11:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.360 04:11:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.360 04:11:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:42.360 04:11:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.360 04:11:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:42.360 "name": "raid_bdev1", 00:13:42.360 "uuid": "1b015646-7686-4770-b33e-9a6a1bf5c475", 00:13:42.360 "strip_size_kb": 0, 00:13:42.360 "state": "online", 00:13:42.360 "raid_level": "raid1", 00:13:42.360 "superblock": true, 00:13:42.360 "num_base_bdevs": 4, 00:13:42.360 "num_base_bdevs_discovered": 3, 00:13:42.360 "num_base_bdevs_operational": 3, 00:13:42.360 "process": { 00:13:42.360 "type": "rebuild", 00:13:42.360 "target": "spare", 00:13:42.360 "progress": { 00:13:42.360 "blocks": 47104, 00:13:42.360 "percent": 74 00:13:42.360 } 00:13:42.360 }, 00:13:42.360 "base_bdevs_list": [ 00:13:42.360 { 00:13:42.360 
"name": "spare", 00:13:42.360 "uuid": "eff98b02-a460-5034-9992-b4af1d65bf25", 00:13:42.360 "is_configured": true, 00:13:42.360 "data_offset": 2048, 00:13:42.360 "data_size": 63488 00:13:42.360 }, 00:13:42.360 { 00:13:42.360 "name": null, 00:13:42.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.360 "is_configured": false, 00:13:42.360 "data_offset": 0, 00:13:42.360 "data_size": 63488 00:13:42.360 }, 00:13:42.360 { 00:13:42.360 "name": "BaseBdev3", 00:13:42.360 "uuid": "d6f6c919-5810-54c5-8aab-9278c3cead9d", 00:13:42.360 "is_configured": true, 00:13:42.360 "data_offset": 2048, 00:13:42.360 "data_size": 63488 00:13:42.360 }, 00:13:42.360 { 00:13:42.360 "name": "BaseBdev4", 00:13:42.360 "uuid": "79d4466a-fd4a-5f36-a745-109ae25edf53", 00:13:42.360 "is_configured": true, 00:13:42.360 "data_offset": 2048, 00:13:42.360 "data_size": 63488 00:13:42.360 } 00:13:42.360 ] 00:13:42.360 }' 00:13:42.360 04:11:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:42.360 04:11:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:42.360 04:11:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:42.360 04:11:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:42.360 04:11:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:43.300 [2024-11-21 04:11:42.918726] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:43.300 95.00 IOPS, 285.00 MiB/s [2024-11-21T04:11:43.273Z] [2024-11-21 04:11:43.023467] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:43.300 [2024-11-21 04:11:43.029346] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:43.300 04:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < 
timeout )) 00:13:43.300 04:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:43.300 04:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:43.300 04:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:43.300 04:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:43.300 04:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:43.301 04:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.301 04:11:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.301 04:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.301 04:11:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.301 04:11:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.561 04:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:43.561 "name": "raid_bdev1", 00:13:43.561 "uuid": "1b015646-7686-4770-b33e-9a6a1bf5c475", 00:13:43.561 "strip_size_kb": 0, 00:13:43.561 "state": "online", 00:13:43.561 "raid_level": "raid1", 00:13:43.561 "superblock": true, 00:13:43.561 "num_base_bdevs": 4, 00:13:43.561 "num_base_bdevs_discovered": 3, 00:13:43.561 "num_base_bdevs_operational": 3, 00:13:43.561 "base_bdevs_list": [ 00:13:43.561 { 00:13:43.561 "name": "spare", 00:13:43.561 "uuid": "eff98b02-a460-5034-9992-b4af1d65bf25", 00:13:43.561 "is_configured": true, 00:13:43.561 "data_offset": 2048, 00:13:43.561 "data_size": 63488 00:13:43.561 }, 00:13:43.561 { 00:13:43.561 "name": null, 00:13:43.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.561 
"is_configured": false, 00:13:43.561 "data_offset": 0, 00:13:43.561 "data_size": 63488 00:13:43.561 }, 00:13:43.561 { 00:13:43.561 "name": "BaseBdev3", 00:13:43.561 "uuid": "d6f6c919-5810-54c5-8aab-9278c3cead9d", 00:13:43.561 "is_configured": true, 00:13:43.561 "data_offset": 2048, 00:13:43.561 "data_size": 63488 00:13:43.561 }, 00:13:43.561 { 00:13:43.561 "name": "BaseBdev4", 00:13:43.561 "uuid": "79d4466a-fd4a-5f36-a745-109ae25edf53", 00:13:43.561 "is_configured": true, 00:13:43.561 "data_offset": 2048, 00:13:43.561 "data_size": 63488 00:13:43.561 } 00:13:43.561 ] 00:13:43.561 }' 00:13:43.561 04:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:43.561 04:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:43.561 04:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:43.561 04:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:43.561 04:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:13:43.561 04:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:43.561 04:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:43.561 04:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:43.561 04:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:43.561 04:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:43.561 04:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.561 04:11:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.561 04:11:43 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.561 04:11:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.561 04:11:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.561 04:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:43.561 "name": "raid_bdev1", 00:13:43.561 "uuid": "1b015646-7686-4770-b33e-9a6a1bf5c475", 00:13:43.561 "strip_size_kb": 0, 00:13:43.561 "state": "online", 00:13:43.561 "raid_level": "raid1", 00:13:43.561 "superblock": true, 00:13:43.561 "num_base_bdevs": 4, 00:13:43.561 "num_base_bdevs_discovered": 3, 00:13:43.561 "num_base_bdevs_operational": 3, 00:13:43.561 "base_bdevs_list": [ 00:13:43.561 { 00:13:43.561 "name": "spare", 00:13:43.561 "uuid": "eff98b02-a460-5034-9992-b4af1d65bf25", 00:13:43.561 "is_configured": true, 00:13:43.561 "data_offset": 2048, 00:13:43.561 "data_size": 63488 00:13:43.561 }, 00:13:43.561 { 00:13:43.561 "name": null, 00:13:43.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.561 "is_configured": false, 00:13:43.561 "data_offset": 0, 00:13:43.561 "data_size": 63488 00:13:43.561 }, 00:13:43.561 { 00:13:43.561 "name": "BaseBdev3", 00:13:43.561 "uuid": "d6f6c919-5810-54c5-8aab-9278c3cead9d", 00:13:43.561 "is_configured": true, 00:13:43.561 "data_offset": 2048, 00:13:43.561 "data_size": 63488 00:13:43.561 }, 00:13:43.561 { 00:13:43.561 "name": "BaseBdev4", 00:13:43.561 "uuid": "79d4466a-fd4a-5f36-a745-109ae25edf53", 00:13:43.561 "is_configured": true, 00:13:43.561 "data_offset": 2048, 00:13:43.562 "data_size": 63488 00:13:43.562 } 00:13:43.562 ] 00:13:43.562 }' 00:13:43.562 04:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:43.562 04:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:43.562 04:11:43 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:43.562 04:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:43.562 04:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:43.562 04:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:43.562 04:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:43.562 04:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:43.562 04:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:43.562 04:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:43.562 04:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:43.562 04:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:43.562 04:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:43.562 04:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:43.562 04:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.562 04:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.562 04:11:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.562 04:11:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.822 04:11:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.822 04:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:43.822 "name": 
"raid_bdev1", 00:13:43.822 "uuid": "1b015646-7686-4770-b33e-9a6a1bf5c475", 00:13:43.822 "strip_size_kb": 0, 00:13:43.822 "state": "online", 00:13:43.822 "raid_level": "raid1", 00:13:43.822 "superblock": true, 00:13:43.822 "num_base_bdevs": 4, 00:13:43.822 "num_base_bdevs_discovered": 3, 00:13:43.822 "num_base_bdevs_operational": 3, 00:13:43.822 "base_bdevs_list": [ 00:13:43.822 { 00:13:43.822 "name": "spare", 00:13:43.822 "uuid": "eff98b02-a460-5034-9992-b4af1d65bf25", 00:13:43.822 "is_configured": true, 00:13:43.822 "data_offset": 2048, 00:13:43.822 "data_size": 63488 00:13:43.822 }, 00:13:43.822 { 00:13:43.822 "name": null, 00:13:43.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.822 "is_configured": false, 00:13:43.822 "data_offset": 0, 00:13:43.822 "data_size": 63488 00:13:43.822 }, 00:13:43.822 { 00:13:43.822 "name": "BaseBdev3", 00:13:43.822 "uuid": "d6f6c919-5810-54c5-8aab-9278c3cead9d", 00:13:43.822 "is_configured": true, 00:13:43.822 "data_offset": 2048, 00:13:43.822 "data_size": 63488 00:13:43.822 }, 00:13:43.822 { 00:13:43.822 "name": "BaseBdev4", 00:13:43.822 "uuid": "79d4466a-fd4a-5f36-a745-109ae25edf53", 00:13:43.822 "is_configured": true, 00:13:43.822 "data_offset": 2048, 00:13:43.822 "data_size": 63488 00:13:43.822 } 00:13:43.822 ] 00:13:43.822 }' 00:13:43.822 04:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:43.822 04:11:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:44.083 86.25 IOPS, 258.75 MiB/s [2024-11-21T04:11:44.056Z] 04:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:44.083 04:11:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.083 04:11:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:44.083 [2024-11-21 04:11:43.965232] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 
00:13:44.083 [2024-11-21 04:11:43.965273] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:44.083 00:13:44.083 Latency(us) 00:13:44.083 [2024-11-21T04:11:44.056Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:44.083 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:44.083 raid_bdev1 : 8.10 85.46 256.39 0.00 0.00 15490.31 293.34 119052.30 00:13:44.083 [2024-11-21T04:11:44.056Z] =================================================================================================================== 00:13:44.083 [2024-11-21T04:11:44.056Z] Total : 85.46 256.39 0.00 0.00 15490.31 293.34 119052.30 00:13:44.083 [2024-11-21 04:11:44.008735] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:44.083 [2024-11-21 04:11:44.008788] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:44.083 [2024-11-21 04:11:44.008935] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:44.083 [2024-11-21 04:11:44.008954] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:13:44.083 { 00:13:44.083 "results": [ 00:13:44.083 { 00:13:44.083 "job": "raid_bdev1", 00:13:44.083 "core_mask": "0x1", 00:13:44.083 "workload": "randrw", 00:13:44.083 "percentage": 50, 00:13:44.083 "status": "finished", 00:13:44.083 "queue_depth": 2, 00:13:44.083 "io_size": 3145728, 00:13:44.083 "runtime": 8.096891, 00:13:44.083 "iops": 85.46490251628681, 00:13:44.083 "mibps": 256.3947075488604, 00:13:44.083 "io_failed": 0, 00:13:44.083 "io_timeout": 0, 00:13:44.083 "avg_latency_us": 15490.31270414216, 00:13:44.083 "min_latency_us": 293.3379912663755, 00:13:44.083 "max_latency_us": 119052.29694323144 00:13:44.083 } 00:13:44.083 ], 00:13:44.083 "core_count": 1 00:13:44.083 } 00:13:44.083 04:11:44 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.083 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:44.083 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.083 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.083 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:44.083 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.083 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:44.083 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:44.083 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:44.083 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:44.083 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:44.083 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:44.083 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:44.083 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:44.083 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:44.083 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:44.083 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:44.083 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:44.083 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:44.344 /dev/nbd0 00:13:44.344 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:44.344 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:44.344 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:44.344 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:13:44.344 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:44.344 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:44.344 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:44.344 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:13:44.344 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:44.344 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:44.344 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:44.344 1+0 records in 00:13:44.344 1+0 records out 00:13:44.344 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000460257 s, 8.9 MB/s 00:13:44.344 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:44.344 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:13:44.344 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:44.344 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:44.344 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:13:44.344 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:44.344 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:44.344 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:44.344 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:13:44.344 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:13:44.344 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:44.344 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:13:44.344 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:13:44.344 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:44.344 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:13:44.344 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:44.344 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:44.344 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:44.344 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:44.344 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:44.344 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:44.344 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:13:44.605 /dev/nbd1 00:13:44.605 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:44.605 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:44.605 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:44.605 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:13:44.605 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:44.605 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:44.605 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:44.605 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:13:44.605 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:44.605 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:44.605 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:44.605 1+0 records in 00:13:44.605 1+0 records out 00:13:44.605 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000273411 s, 15.0 MB/s 00:13:44.605 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:44.605 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:13:44.605 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:44.605 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:13:44.605 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:13:44.605 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:44.605 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:44.605 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:44.866 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:44.866 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:44.866 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:44.866 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:44.866 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:44.866 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:44.866 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:44.866 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:44.866 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:44.866 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:44.866 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:44.866 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:44.866 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:44.866 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 
00:13:44.866 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:44.866 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:44.866 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:13:44.866 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:13:44.866 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:44.866 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:13:44.866 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:44.866 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:44.866 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:44.866 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:44.866 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:44.866 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:44.866 04:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:13:45.127 /dev/nbd1 00:13:45.127 04:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:45.127 04:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:45.127 04:11:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:45.127 04:11:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:13:45.127 04:11:45 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:45.127 04:11:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:45.127 04:11:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:45.127 04:11:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:13:45.127 04:11:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:45.127 04:11:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:45.127 04:11:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:45.127 1+0 records in 00:13:45.127 1+0 records out 00:13:45.127 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000454654 s, 9.0 MB/s 00:13:45.127 04:11:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:45.127 04:11:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:13:45.127 04:11:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:45.127 04:11:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:45.127 04:11:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:13:45.127 04:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:45.127 04:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:45.127 04:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:45.387 04:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:45.387 
04:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:45.387 04:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:45.387 04:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:45.387 04:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:45.387 04:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:45.387 04:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:45.387 04:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:45.387 04:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:45.387 04:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:45.387 04:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:45.387 04:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:45.387 04:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:45.648 04:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:45.648 04:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:45.648 04:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:45.648 04:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:45.648 04:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:45.648 04:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:45.648 
04:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:45.648 04:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:45.648 04:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:45.648 04:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:45.648 04:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:45.648 04:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:45.648 04:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:45.648 04:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:45.648 04:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:45.648 04:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:45.648 04:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:45.648 04:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:45.648 04:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:45.648 04:11:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.648 04:11:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.648 04:11:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.648 04:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:45.648 04:11:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.648 
04:11:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.648 [2024-11-21 04:11:45.579078] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:45.648 [2024-11-21 04:11:45.579145] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:45.648 [2024-11-21 04:11:45.579167] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:13:45.648 [2024-11-21 04:11:45.579180] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:45.648 [2024-11-21 04:11:45.581860] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:45.648 [2024-11-21 04:11:45.581900] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:45.648 [2024-11-21 04:11:45.581996] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:45.648 [2024-11-21 04:11:45.582053] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:45.648 [2024-11-21 04:11:45.582220] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:45.648 [2024-11-21 04:11:45.582391] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:45.648 spare 00:13:45.648 04:11:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.648 04:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:45.648 04:11:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.648 04:11:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.908 [2024-11-21 04:11:45.682299] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:13:45.908 [2024-11-21 04:11:45.682329] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 
63488, blocklen 512 00:13:45.908 [2024-11-21 04:11:45.682630] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000337b0 00:13:45.908 [2024-11-21 04:11:45.682804] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:13:45.908 [2024-11-21 04:11:45.682853] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:13:45.908 [2024-11-21 04:11:45.683031] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:45.908 04:11:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.908 04:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:45.908 04:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:45.908 04:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:45.908 04:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:45.908 04:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:45.908 04:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:45.908 04:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:45.908 04:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:45.908 04:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:45.908 04:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:45.908 04:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.908 04:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] 
| select(.name == "raid_bdev1")' 00:13:45.908 04:11:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.908 04:11:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.908 04:11:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.908 04:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:45.908 "name": "raid_bdev1", 00:13:45.908 "uuid": "1b015646-7686-4770-b33e-9a6a1bf5c475", 00:13:45.908 "strip_size_kb": 0, 00:13:45.908 "state": "online", 00:13:45.908 "raid_level": "raid1", 00:13:45.908 "superblock": true, 00:13:45.908 "num_base_bdevs": 4, 00:13:45.908 "num_base_bdevs_discovered": 3, 00:13:45.908 "num_base_bdevs_operational": 3, 00:13:45.908 "base_bdevs_list": [ 00:13:45.908 { 00:13:45.908 "name": "spare", 00:13:45.908 "uuid": "eff98b02-a460-5034-9992-b4af1d65bf25", 00:13:45.908 "is_configured": true, 00:13:45.908 "data_offset": 2048, 00:13:45.908 "data_size": 63488 00:13:45.908 }, 00:13:45.908 { 00:13:45.908 "name": null, 00:13:45.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.908 "is_configured": false, 00:13:45.908 "data_offset": 2048, 00:13:45.908 "data_size": 63488 00:13:45.908 }, 00:13:45.908 { 00:13:45.908 "name": "BaseBdev3", 00:13:45.908 "uuid": "d6f6c919-5810-54c5-8aab-9278c3cead9d", 00:13:45.908 "is_configured": true, 00:13:45.908 "data_offset": 2048, 00:13:45.908 "data_size": 63488 00:13:45.908 }, 00:13:45.908 { 00:13:45.908 "name": "BaseBdev4", 00:13:45.908 "uuid": "79d4466a-fd4a-5f36-a745-109ae25edf53", 00:13:45.908 "is_configured": true, 00:13:45.908 "data_offset": 2048, 00:13:45.908 "data_size": 63488 00:13:45.908 } 00:13:45.908 ] 00:13:45.908 }' 00:13:45.908 04:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:45.908 04:11:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:46.168 04:11:46 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:46.168 04:11:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:46.168 04:11:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:46.168 04:11:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:46.168 04:11:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:46.168 04:11:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:46.168 04:11:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.168 04:11:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.168 04:11:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:46.428 04:11:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.428 04:11:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:46.428 "name": "raid_bdev1", 00:13:46.428 "uuid": "1b015646-7686-4770-b33e-9a6a1bf5c475", 00:13:46.428 "strip_size_kb": 0, 00:13:46.428 "state": "online", 00:13:46.428 "raid_level": "raid1", 00:13:46.428 "superblock": true, 00:13:46.428 "num_base_bdevs": 4, 00:13:46.428 "num_base_bdevs_discovered": 3, 00:13:46.428 "num_base_bdevs_operational": 3, 00:13:46.428 "base_bdevs_list": [ 00:13:46.428 { 00:13:46.428 "name": "spare", 00:13:46.428 "uuid": "eff98b02-a460-5034-9992-b4af1d65bf25", 00:13:46.428 "is_configured": true, 00:13:46.428 "data_offset": 2048, 00:13:46.428 "data_size": 63488 00:13:46.429 }, 00:13:46.429 { 00:13:46.429 "name": null, 00:13:46.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.429 "is_configured": false, 00:13:46.429 "data_offset": 
2048, 00:13:46.429 "data_size": 63488 00:13:46.429 }, 00:13:46.429 { 00:13:46.429 "name": "BaseBdev3", 00:13:46.429 "uuid": "d6f6c919-5810-54c5-8aab-9278c3cead9d", 00:13:46.429 "is_configured": true, 00:13:46.429 "data_offset": 2048, 00:13:46.429 "data_size": 63488 00:13:46.429 }, 00:13:46.429 { 00:13:46.429 "name": "BaseBdev4", 00:13:46.429 "uuid": "79d4466a-fd4a-5f36-a745-109ae25edf53", 00:13:46.429 "is_configured": true, 00:13:46.429 "data_offset": 2048, 00:13:46.429 "data_size": 63488 00:13:46.429 } 00:13:46.429 ] 00:13:46.429 }' 00:13:46.429 04:11:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:46.429 04:11:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:46.429 04:11:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:46.429 04:11:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:46.429 04:11:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.429 04:11:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:46.429 04:11:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.429 04:11:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:46.429 04:11:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.429 04:11:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:46.429 04:11:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:46.429 04:11:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.429 04:11:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:13:46.429 [2024-11-21 04:11:46.329979] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:46.429 04:11:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.429 04:11:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:46.429 04:11:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:46.429 04:11:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:46.429 04:11:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:46.429 04:11:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:46.429 04:11:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:46.429 04:11:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:46.429 04:11:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:46.429 04:11:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:46.429 04:11:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:46.429 04:11:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.429 04:11:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:46.429 04:11:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.429 04:11:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:46.429 04:11:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.429 04:11:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:13:46.429 "name": "raid_bdev1", 00:13:46.429 "uuid": "1b015646-7686-4770-b33e-9a6a1bf5c475", 00:13:46.429 "strip_size_kb": 0, 00:13:46.429 "state": "online", 00:13:46.429 "raid_level": "raid1", 00:13:46.429 "superblock": true, 00:13:46.429 "num_base_bdevs": 4, 00:13:46.429 "num_base_bdevs_discovered": 2, 00:13:46.429 "num_base_bdevs_operational": 2, 00:13:46.429 "base_bdevs_list": [ 00:13:46.429 { 00:13:46.429 "name": null, 00:13:46.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.429 "is_configured": false, 00:13:46.429 "data_offset": 0, 00:13:46.429 "data_size": 63488 00:13:46.429 }, 00:13:46.429 { 00:13:46.429 "name": null, 00:13:46.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.429 "is_configured": false, 00:13:46.429 "data_offset": 2048, 00:13:46.429 "data_size": 63488 00:13:46.429 }, 00:13:46.429 { 00:13:46.429 "name": "BaseBdev3", 00:13:46.429 "uuid": "d6f6c919-5810-54c5-8aab-9278c3cead9d", 00:13:46.429 "is_configured": true, 00:13:46.429 "data_offset": 2048, 00:13:46.429 "data_size": 63488 00:13:46.429 }, 00:13:46.429 { 00:13:46.429 "name": "BaseBdev4", 00:13:46.429 "uuid": "79d4466a-fd4a-5f36-a745-109ae25edf53", 00:13:46.429 "is_configured": true, 00:13:46.429 "data_offset": 2048, 00:13:46.429 "data_size": 63488 00:13:46.429 } 00:13:46.429 ] 00:13:46.429 }' 00:13:46.429 04:11:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:46.429 04:11:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.000 04:11:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:47.000 04:11:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.000 04:11:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.000 [2024-11-21 04:11:46.793274] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 
00:13:47.000 [2024-11-21 04:11:46.793537] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:13:47.000 [2024-11-21 04:11:46.793563] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:47.000 [2024-11-21 04:11:46.793613] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:47.000 [2024-11-21 04:11:46.801582] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000033880 00:13:47.000 04:11:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.000 04:11:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:47.000 [2024-11-21 04:11:46.803861] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:47.942 04:11:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:47.942 04:11:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:47.942 04:11:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:47.942 04:11:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:47.942 04:11:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:47.942 04:11:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.942 04:11:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.942 04:11:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.942 04:11:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.942 04:11:47 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.942 04:11:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:47.942 "name": "raid_bdev1", 00:13:47.942 "uuid": "1b015646-7686-4770-b33e-9a6a1bf5c475", 00:13:47.942 "strip_size_kb": 0, 00:13:47.942 "state": "online", 00:13:47.942 "raid_level": "raid1", 00:13:47.942 "superblock": true, 00:13:47.942 "num_base_bdevs": 4, 00:13:47.942 "num_base_bdevs_discovered": 3, 00:13:47.942 "num_base_bdevs_operational": 3, 00:13:47.942 "process": { 00:13:47.942 "type": "rebuild", 00:13:47.942 "target": "spare", 00:13:47.942 "progress": { 00:13:47.942 "blocks": 20480, 00:13:47.942 "percent": 32 00:13:47.942 } 00:13:47.942 }, 00:13:47.942 "base_bdevs_list": [ 00:13:47.942 { 00:13:47.942 "name": "spare", 00:13:47.942 "uuid": "eff98b02-a460-5034-9992-b4af1d65bf25", 00:13:47.942 "is_configured": true, 00:13:47.942 "data_offset": 2048, 00:13:47.942 "data_size": 63488 00:13:47.942 }, 00:13:47.942 { 00:13:47.942 "name": null, 00:13:47.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.942 "is_configured": false, 00:13:47.942 "data_offset": 2048, 00:13:47.942 "data_size": 63488 00:13:47.942 }, 00:13:47.942 { 00:13:47.942 "name": "BaseBdev3", 00:13:47.942 "uuid": "d6f6c919-5810-54c5-8aab-9278c3cead9d", 00:13:47.942 "is_configured": true, 00:13:47.942 "data_offset": 2048, 00:13:47.942 "data_size": 63488 00:13:47.942 }, 00:13:47.942 { 00:13:47.942 "name": "BaseBdev4", 00:13:47.942 "uuid": "79d4466a-fd4a-5f36-a745-109ae25edf53", 00:13:47.942 "is_configured": true, 00:13:47.942 "data_offset": 2048, 00:13:47.942 "data_size": 63488 00:13:47.942 } 00:13:47.942 ] 00:13:47.942 }' 00:13:47.942 04:11:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:47.942 04:11:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:47.942 04:11:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 
-- # jq -r '.process.target // "none"' 00:13:48.203 04:11:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:48.203 04:11:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:48.203 04:11:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.203 04:11:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.203 [2024-11-21 04:11:47.940335] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:48.203 [2024-11-21 04:11:48.011734] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:48.203 [2024-11-21 04:11:48.011807] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:48.203 [2024-11-21 04:11:48.011823] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:48.203 [2024-11-21 04:11:48.011833] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:48.203 04:11:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.203 04:11:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:48.203 04:11:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:48.203 04:11:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:48.203 04:11:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:48.203 04:11:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:48.203 04:11:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:48.203 04:11:48 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:48.203 04:11:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:48.203 04:11:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:48.203 04:11:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:48.203 04:11:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.203 04:11:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.203 04:11:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.203 04:11:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.203 04:11:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.203 04:11:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:48.203 "name": "raid_bdev1", 00:13:48.203 "uuid": "1b015646-7686-4770-b33e-9a6a1bf5c475", 00:13:48.203 "strip_size_kb": 0, 00:13:48.203 "state": "online", 00:13:48.203 "raid_level": "raid1", 00:13:48.203 "superblock": true, 00:13:48.203 "num_base_bdevs": 4, 00:13:48.203 "num_base_bdevs_discovered": 2, 00:13:48.203 "num_base_bdevs_operational": 2, 00:13:48.203 "base_bdevs_list": [ 00:13:48.203 { 00:13:48.203 "name": null, 00:13:48.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.203 "is_configured": false, 00:13:48.203 "data_offset": 0, 00:13:48.203 "data_size": 63488 00:13:48.203 }, 00:13:48.203 { 00:13:48.203 "name": null, 00:13:48.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.203 "is_configured": false, 00:13:48.203 "data_offset": 2048, 00:13:48.203 "data_size": 63488 00:13:48.203 }, 00:13:48.203 { 00:13:48.203 "name": "BaseBdev3", 00:13:48.203 "uuid": "d6f6c919-5810-54c5-8aab-9278c3cead9d", 00:13:48.203 
"is_configured": true, 00:13:48.203 "data_offset": 2048, 00:13:48.203 "data_size": 63488 00:13:48.203 }, 00:13:48.203 { 00:13:48.203 "name": "BaseBdev4", 00:13:48.203 "uuid": "79d4466a-fd4a-5f36-a745-109ae25edf53", 00:13:48.203 "is_configured": true, 00:13:48.203 "data_offset": 2048, 00:13:48.203 "data_size": 63488 00:13:48.203 } 00:13:48.203 ] 00:13:48.203 }' 00:13:48.203 04:11:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:48.203 04:11:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.774 04:11:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:48.774 04:11:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.774 04:11:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.774 [2024-11-21 04:11:48.474974] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:48.774 [2024-11-21 04:11:48.475330] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:48.774 [2024-11-21 04:11:48.475431] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:13:48.774 [2024-11-21 04:11:48.475516] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:48.774 [2024-11-21 04:11:48.476091] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:48.774 [2024-11-21 04:11:48.476194] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:48.774 [2024-11-21 04:11:48.476392] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:48.774 [2024-11-21 04:11:48.476418] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:13:48.774 [2024-11-21 04:11:48.476431] 
bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:48.774 [2024-11-21 04:11:48.476548] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:48.774 [2024-11-21 04:11:48.484470] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000033950 00:13:48.774 spare 00:13:48.774 04:11:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.774 04:11:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:48.774 [2024-11-21 04:11:48.486711] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:49.714 04:11:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:49.714 04:11:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:49.714 04:11:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:49.714 04:11:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:49.714 04:11:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:49.714 04:11:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.714 04:11:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.714 04:11:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.714 04:11:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.714 04:11:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.714 04:11:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:49.714 "name": "raid_bdev1", 00:13:49.714 
"uuid": "1b015646-7686-4770-b33e-9a6a1bf5c475", 00:13:49.714 "strip_size_kb": 0, 00:13:49.714 "state": "online", 00:13:49.714 "raid_level": "raid1", 00:13:49.714 "superblock": true, 00:13:49.714 "num_base_bdevs": 4, 00:13:49.714 "num_base_bdevs_discovered": 3, 00:13:49.714 "num_base_bdevs_operational": 3, 00:13:49.714 "process": { 00:13:49.714 "type": "rebuild", 00:13:49.714 "target": "spare", 00:13:49.714 "progress": { 00:13:49.714 "blocks": 20480, 00:13:49.714 "percent": 32 00:13:49.714 } 00:13:49.714 }, 00:13:49.714 "base_bdevs_list": [ 00:13:49.714 { 00:13:49.714 "name": "spare", 00:13:49.714 "uuid": "eff98b02-a460-5034-9992-b4af1d65bf25", 00:13:49.714 "is_configured": true, 00:13:49.714 "data_offset": 2048, 00:13:49.714 "data_size": 63488 00:13:49.714 }, 00:13:49.714 { 00:13:49.714 "name": null, 00:13:49.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.714 "is_configured": false, 00:13:49.714 "data_offset": 2048, 00:13:49.714 "data_size": 63488 00:13:49.714 }, 00:13:49.714 { 00:13:49.714 "name": "BaseBdev3", 00:13:49.714 "uuid": "d6f6c919-5810-54c5-8aab-9278c3cead9d", 00:13:49.714 "is_configured": true, 00:13:49.714 "data_offset": 2048, 00:13:49.714 "data_size": 63488 00:13:49.714 }, 00:13:49.714 { 00:13:49.714 "name": "BaseBdev4", 00:13:49.714 "uuid": "79d4466a-fd4a-5f36-a745-109ae25edf53", 00:13:49.714 "is_configured": true, 00:13:49.714 "data_offset": 2048, 00:13:49.714 "data_size": 63488 00:13:49.714 } 00:13:49.714 ] 00:13:49.714 }' 00:13:49.714 04:11:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:49.714 04:11:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:49.714 04:11:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:49.714 04:11:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:49.714 04:11:49 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:49.714 04:11:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.714 04:11:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.714 [2024-11-21 04:11:49.638953] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:49.975 [2024-11-21 04:11:49.694783] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:49.975 [2024-11-21 04:11:49.695267] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:49.975 [2024-11-21 04:11:49.695295] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:49.975 [2024-11-21 04:11:49.695306] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:49.975 04:11:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.975 04:11:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:49.975 04:11:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:49.975 04:11:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:49.975 04:11:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:49.975 04:11:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:49.975 04:11:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:49.975 04:11:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:49.975 04:11:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:49.975 04:11:49 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:49.975 04:11:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:49.975 04:11:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.975 04:11:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.975 04:11:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.975 04:11:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.975 04:11:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.975 04:11:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:49.975 "name": "raid_bdev1", 00:13:49.975 "uuid": "1b015646-7686-4770-b33e-9a6a1bf5c475", 00:13:49.975 "strip_size_kb": 0, 00:13:49.975 "state": "online", 00:13:49.975 "raid_level": "raid1", 00:13:49.975 "superblock": true, 00:13:49.975 "num_base_bdevs": 4, 00:13:49.975 "num_base_bdevs_discovered": 2, 00:13:49.975 "num_base_bdevs_operational": 2, 00:13:49.975 "base_bdevs_list": [ 00:13:49.975 { 00:13:49.975 "name": null, 00:13:49.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.975 "is_configured": false, 00:13:49.975 "data_offset": 0, 00:13:49.975 "data_size": 63488 00:13:49.975 }, 00:13:49.975 { 00:13:49.975 "name": null, 00:13:49.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.975 "is_configured": false, 00:13:49.975 "data_offset": 2048, 00:13:49.975 "data_size": 63488 00:13:49.975 }, 00:13:49.975 { 00:13:49.975 "name": "BaseBdev3", 00:13:49.975 "uuid": "d6f6c919-5810-54c5-8aab-9278c3cead9d", 00:13:49.975 "is_configured": true, 00:13:49.975 "data_offset": 2048, 00:13:49.975 "data_size": 63488 00:13:49.975 }, 00:13:49.975 { 00:13:49.975 "name": "BaseBdev4", 00:13:49.975 "uuid": 
"79d4466a-fd4a-5f36-a745-109ae25edf53", 00:13:49.975 "is_configured": true, 00:13:49.975 "data_offset": 2048, 00:13:49.975 "data_size": 63488 00:13:49.975 } 00:13:49.975 ] 00:13:49.975 }' 00:13:49.975 04:11:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:49.975 04:11:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.235 04:11:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:50.235 04:11:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:50.235 04:11:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:50.235 04:11:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:50.235 04:11:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:50.235 04:11:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:50.235 04:11:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.235 04:11:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.235 04:11:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.495 04:11:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.495 04:11:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:50.495 "name": "raid_bdev1", 00:13:50.495 "uuid": "1b015646-7686-4770-b33e-9a6a1bf5c475", 00:13:50.495 "strip_size_kb": 0, 00:13:50.495 "state": "online", 00:13:50.495 "raid_level": "raid1", 00:13:50.495 "superblock": true, 00:13:50.495 "num_base_bdevs": 4, 00:13:50.495 "num_base_bdevs_discovered": 2, 00:13:50.495 "num_base_bdevs_operational": 2, 00:13:50.495 
"base_bdevs_list": [ 00:13:50.495 { 00:13:50.495 "name": null, 00:13:50.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.495 "is_configured": false, 00:13:50.495 "data_offset": 0, 00:13:50.495 "data_size": 63488 00:13:50.495 }, 00:13:50.495 { 00:13:50.495 "name": null, 00:13:50.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.495 "is_configured": false, 00:13:50.495 "data_offset": 2048, 00:13:50.495 "data_size": 63488 00:13:50.495 }, 00:13:50.495 { 00:13:50.495 "name": "BaseBdev3", 00:13:50.495 "uuid": "d6f6c919-5810-54c5-8aab-9278c3cead9d", 00:13:50.495 "is_configured": true, 00:13:50.495 "data_offset": 2048, 00:13:50.495 "data_size": 63488 00:13:50.495 }, 00:13:50.495 { 00:13:50.495 "name": "BaseBdev4", 00:13:50.495 "uuid": "79d4466a-fd4a-5f36-a745-109ae25edf53", 00:13:50.495 "is_configured": true, 00:13:50.495 "data_offset": 2048, 00:13:50.495 "data_size": 63488 00:13:50.495 } 00:13:50.495 ] 00:13:50.495 }' 00:13:50.495 04:11:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:50.495 04:11:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:50.495 04:11:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:50.495 04:11:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:50.495 04:11:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:50.495 04:11:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.495 04:11:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.495 04:11:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.495 04:11:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 
00:13:50.495 04:11:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.495 04:11:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.495 [2024-11-21 04:11:50.318525] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:50.495 [2024-11-21 04:11:50.318823] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:50.495 [2024-11-21 04:11:50.318919] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:13:50.495 [2024-11-21 04:11:50.318977] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:50.495 [2024-11-21 04:11:50.319535] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:50.495 [2024-11-21 04:11:50.319632] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:50.495 [2024-11-21 04:11:50.319785] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:50.495 [2024-11-21 04:11:50.319810] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:13:50.495 [2024-11-21 04:11:50.319821] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:50.495 [2024-11-21 04:11:50.319835] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:50.495 BaseBdev1 00:13:50.495 04:11:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.495 04:11:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:51.437 04:11:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:51.437 04:11:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:13:51.437 04:11:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:51.437 04:11:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:51.437 04:11:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:51.437 04:11:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:51.437 04:11:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:51.437 04:11:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:51.437 04:11:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:51.437 04:11:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:51.437 04:11:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.437 04:11:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:51.437 04:11:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.437 04:11:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.437 04:11:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.437 04:11:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:51.437 "name": "raid_bdev1", 00:13:51.437 "uuid": "1b015646-7686-4770-b33e-9a6a1bf5c475", 00:13:51.437 "strip_size_kb": 0, 00:13:51.437 "state": "online", 00:13:51.437 "raid_level": "raid1", 00:13:51.437 "superblock": true, 00:13:51.437 "num_base_bdevs": 4, 00:13:51.437 "num_base_bdevs_discovered": 2, 00:13:51.437 "num_base_bdevs_operational": 2, 00:13:51.437 "base_bdevs_list": [ 00:13:51.437 { 00:13:51.437 
"name": null, 00:13:51.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.437 "is_configured": false, 00:13:51.437 "data_offset": 0, 00:13:51.437 "data_size": 63488 00:13:51.437 }, 00:13:51.437 { 00:13:51.437 "name": null, 00:13:51.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.437 "is_configured": false, 00:13:51.437 "data_offset": 2048, 00:13:51.437 "data_size": 63488 00:13:51.437 }, 00:13:51.437 { 00:13:51.437 "name": "BaseBdev3", 00:13:51.437 "uuid": "d6f6c919-5810-54c5-8aab-9278c3cead9d", 00:13:51.437 "is_configured": true, 00:13:51.437 "data_offset": 2048, 00:13:51.437 "data_size": 63488 00:13:51.437 }, 00:13:51.437 { 00:13:51.437 "name": "BaseBdev4", 00:13:51.437 "uuid": "79d4466a-fd4a-5f36-a745-109ae25edf53", 00:13:51.437 "is_configured": true, 00:13:51.437 "data_offset": 2048, 00:13:51.437 "data_size": 63488 00:13:51.437 } 00:13:51.437 ] 00:13:51.437 }' 00:13:51.437 04:11:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:51.437 04:11:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.009 04:11:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:52.009 04:11:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:52.009 04:11:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:52.009 04:11:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:52.009 04:11:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:52.009 04:11:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.009 04:11:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.009 04:11:51 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.009 04:11:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.009 04:11:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.009 04:11:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:52.009 "name": "raid_bdev1", 00:13:52.009 "uuid": "1b015646-7686-4770-b33e-9a6a1bf5c475", 00:13:52.009 "strip_size_kb": 0, 00:13:52.009 "state": "online", 00:13:52.009 "raid_level": "raid1", 00:13:52.009 "superblock": true, 00:13:52.009 "num_base_bdevs": 4, 00:13:52.009 "num_base_bdevs_discovered": 2, 00:13:52.009 "num_base_bdevs_operational": 2, 00:13:52.009 "base_bdevs_list": [ 00:13:52.009 { 00:13:52.009 "name": null, 00:13:52.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.009 "is_configured": false, 00:13:52.009 "data_offset": 0, 00:13:52.009 "data_size": 63488 00:13:52.009 }, 00:13:52.009 { 00:13:52.009 "name": null, 00:13:52.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.009 "is_configured": false, 00:13:52.009 "data_offset": 2048, 00:13:52.009 "data_size": 63488 00:13:52.009 }, 00:13:52.009 { 00:13:52.009 "name": "BaseBdev3", 00:13:52.009 "uuid": "d6f6c919-5810-54c5-8aab-9278c3cead9d", 00:13:52.009 "is_configured": true, 00:13:52.009 "data_offset": 2048, 00:13:52.009 "data_size": 63488 00:13:52.009 }, 00:13:52.009 { 00:13:52.009 "name": "BaseBdev4", 00:13:52.009 "uuid": "79d4466a-fd4a-5f36-a745-109ae25edf53", 00:13:52.009 "is_configured": true, 00:13:52.009 "data_offset": 2048, 00:13:52.009 "data_size": 63488 00:13:52.009 } 00:13:52.009 ] 00:13:52.009 }' 00:13:52.009 04:11:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:52.009 04:11:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:52.009 04:11:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:13:52.009 04:11:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:52.009 04:11:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:52.009 04:11:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:13:52.009 04:11:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:52.009 04:11:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:52.009 04:11:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:52.009 04:11:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:52.009 04:11:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:52.009 04:11:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:52.009 04:11:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.009 04:11:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.009 [2024-11-21 04:11:51.908301] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:52.009 [2024-11-21 04:11:51.908520] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:13:52.009 [2024-11-21 04:11:51.908543] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:52.009 request: 00:13:52.009 { 00:13:52.009 "base_bdev": "BaseBdev1", 00:13:52.009 "raid_bdev": "raid_bdev1", 00:13:52.009 "method": "bdev_raid_add_base_bdev", 00:13:52.009 
"req_id": 1 00:13:52.009 } 00:13:52.009 Got JSON-RPC error response 00:13:52.009 response: 00:13:52.009 { 00:13:52.009 "code": -22, 00:13:52.009 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:52.009 } 00:13:52.009 04:11:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:52.009 04:11:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:13:52.009 04:11:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:52.009 04:11:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:52.009 04:11:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:52.009 04:11:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:52.950 04:11:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:52.950 04:11:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:52.950 04:11:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:52.950 04:11:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:52.950 04:11:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:52.950 04:11:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:52.950 04:11:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:53.210 04:11:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:53.210 04:11:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:53.210 04:11:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:53.210 
04:11:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.210 04:11:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:53.210 04:11:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.210 04:11:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:53.210 04:11:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.210 04:11:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:53.210 "name": "raid_bdev1", 00:13:53.210 "uuid": "1b015646-7686-4770-b33e-9a6a1bf5c475", 00:13:53.210 "strip_size_kb": 0, 00:13:53.210 "state": "online", 00:13:53.210 "raid_level": "raid1", 00:13:53.210 "superblock": true, 00:13:53.210 "num_base_bdevs": 4, 00:13:53.210 "num_base_bdevs_discovered": 2, 00:13:53.210 "num_base_bdevs_operational": 2, 00:13:53.210 "base_bdevs_list": [ 00:13:53.210 { 00:13:53.210 "name": null, 00:13:53.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.210 "is_configured": false, 00:13:53.210 "data_offset": 0, 00:13:53.210 "data_size": 63488 00:13:53.210 }, 00:13:53.210 { 00:13:53.210 "name": null, 00:13:53.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.210 "is_configured": false, 00:13:53.210 "data_offset": 2048, 00:13:53.210 "data_size": 63488 00:13:53.210 }, 00:13:53.210 { 00:13:53.210 "name": "BaseBdev3", 00:13:53.210 "uuid": "d6f6c919-5810-54c5-8aab-9278c3cead9d", 00:13:53.210 "is_configured": true, 00:13:53.210 "data_offset": 2048, 00:13:53.210 "data_size": 63488 00:13:53.210 }, 00:13:53.210 { 00:13:53.210 "name": "BaseBdev4", 00:13:53.210 "uuid": "79d4466a-fd4a-5f36-a745-109ae25edf53", 00:13:53.210 "is_configured": true, 00:13:53.210 "data_offset": 2048, 00:13:53.210 "data_size": 63488 00:13:53.210 } 00:13:53.210 ] 00:13:53.210 }' 00:13:53.210 04:11:52 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:53.210 04:11:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:53.470 04:11:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:53.470 04:11:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:53.470 04:11:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:53.471 04:11:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:53.471 04:11:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:53.471 04:11:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:53.471 04:11:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.471 04:11:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.471 04:11:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:53.471 04:11:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.471 04:11:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:53.471 "name": "raid_bdev1", 00:13:53.471 "uuid": "1b015646-7686-4770-b33e-9a6a1bf5c475", 00:13:53.471 "strip_size_kb": 0, 00:13:53.471 "state": "online", 00:13:53.471 "raid_level": "raid1", 00:13:53.471 "superblock": true, 00:13:53.471 "num_base_bdevs": 4, 00:13:53.471 "num_base_bdevs_discovered": 2, 00:13:53.471 "num_base_bdevs_operational": 2, 00:13:53.471 "base_bdevs_list": [ 00:13:53.471 { 00:13:53.471 "name": null, 00:13:53.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.471 "is_configured": false, 00:13:53.471 "data_offset": 0, 00:13:53.471 
"data_size": 63488 00:13:53.471 }, 00:13:53.471 { 00:13:53.471 "name": null, 00:13:53.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.471 "is_configured": false, 00:13:53.471 "data_offset": 2048, 00:13:53.471 "data_size": 63488 00:13:53.471 }, 00:13:53.471 { 00:13:53.471 "name": "BaseBdev3", 00:13:53.471 "uuid": "d6f6c919-5810-54c5-8aab-9278c3cead9d", 00:13:53.471 "is_configured": true, 00:13:53.471 "data_offset": 2048, 00:13:53.471 "data_size": 63488 00:13:53.471 }, 00:13:53.471 { 00:13:53.471 "name": "BaseBdev4", 00:13:53.471 "uuid": "79d4466a-fd4a-5f36-a745-109ae25edf53", 00:13:53.471 "is_configured": true, 00:13:53.471 "data_offset": 2048, 00:13:53.471 "data_size": 63488 00:13:53.471 } 00:13:53.471 ] 00:13:53.471 }' 00:13:53.471 04:11:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:53.731 04:11:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:53.731 04:11:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:53.731 04:11:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:53.731 04:11:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 89794 00:13:53.731 04:11:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 89794 ']' 00:13:53.731 04:11:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 89794 00:13:53.731 04:11:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:13:53.731 04:11:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:53.731 04:11:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89794 00:13:53.731 04:11:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:53.731 
04:11:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:53.731 killing process with pid 89794 00:13:53.731 04:11:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89794' 00:13:53.731 04:11:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 89794 00:13:53.731 Received shutdown signal, test time was about 17.684153 seconds 00:13:53.731 00:13:53.731 Latency(us) 00:13:53.731 [2024-11-21T04:11:53.704Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:53.731 [2024-11-21T04:11:53.704Z] =================================================================================================================== 00:13:53.731 [2024-11-21T04:11:53.704Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:53.731 [2024-11-21 04:11:53.573838] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:53.731 04:11:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 89794 00:13:53.731 [2024-11-21 04:11:53.574011] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:53.731 [2024-11-21 04:11:53.574093] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:53.731 [2024-11-21 04:11:53.574110] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:13:53.731 [2024-11-21 04:11:53.659462] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:54.303 04:11:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:54.303 00:13:54.303 real 0m19.816s 00:13:54.303 user 0m26.233s 00:13:54.303 sys 0m2.743s 00:13:54.303 04:11:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:54.303 04:11:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.303 
************************************ 00:13:54.303 END TEST raid_rebuild_test_sb_io 00:13:54.303 ************************************ 00:13:54.303 04:11:54 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:13:54.303 04:11:54 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:13:54.303 04:11:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:54.303 04:11:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:54.303 04:11:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:54.303 ************************************ 00:13:54.303 START TEST raid5f_state_function_test 00:13:54.303 ************************************ 00:13:54.303 04:11:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:13:54.303 04:11:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:13:54.303 04:11:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:13:54.303 04:11:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:13:54.303 04:11:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:54.303 04:11:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:54.303 04:11:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:54.303 04:11:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:54.303 04:11:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:54.303 04:11:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:54.303 04:11:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:54.303 04:11:54 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:54.303 04:11:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:54.303 04:11:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:54.303 04:11:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:54.303 04:11:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:54.303 04:11:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:54.303 04:11:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:54.303 04:11:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:54.303 04:11:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:54.303 04:11:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:54.303 04:11:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:54.303 04:11:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:13:54.303 04:11:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:54.303 04:11:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:54.303 04:11:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:13:54.303 04:11:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:13:54.303 04:11:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=90500 00:13:54.303 04:11:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:54.303 04:11:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 90500' 00:13:54.303 Process raid pid: 90500 00:13:54.303 04:11:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 90500 00:13:54.303 04:11:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 90500 ']' 00:13:54.303 04:11:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:54.303 04:11:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:54.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:54.303 04:11:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:54.303 04:11:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:54.303 04:11:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.303 [2024-11-21 04:11:54.157504] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:13:54.303 [2024-11-21 04:11:54.157640] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:54.563 [2024-11-21 04:11:54.312738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:54.563 [2024-11-21 04:11:54.350696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:54.563 [2024-11-21 04:11:54.425996] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:54.563 [2024-11-21 04:11:54.426050] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:55.135 04:11:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:55.135 04:11:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:13:55.135 04:11:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:55.135 04:11:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.135 04:11:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.135 [2024-11-21 04:11:54.976834] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:55.135 [2024-11-21 04:11:54.976896] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:55.135 [2024-11-21 04:11:54.976906] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:55.135 [2024-11-21 04:11:54.976934] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:55.135 [2024-11-21 04:11:54.976940] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:13:55.135 [2024-11-21 04:11:54.976951] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:55.135 04:11:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.135 04:11:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:55.135 04:11:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:55.135 04:11:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:55.135 04:11:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:55.135 04:11:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:55.135 04:11:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:55.135 04:11:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:55.135 04:11:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:55.135 04:11:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:55.135 04:11:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:55.135 04:11:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.135 04:11:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:55.135 04:11:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.135 04:11:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.135 04:11:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:13:55.135 04:11:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:55.135 "name": "Existed_Raid", 00:13:55.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.135 "strip_size_kb": 64, 00:13:55.135 "state": "configuring", 00:13:55.135 "raid_level": "raid5f", 00:13:55.135 "superblock": false, 00:13:55.135 "num_base_bdevs": 3, 00:13:55.135 "num_base_bdevs_discovered": 0, 00:13:55.135 "num_base_bdevs_operational": 3, 00:13:55.135 "base_bdevs_list": [ 00:13:55.135 { 00:13:55.135 "name": "BaseBdev1", 00:13:55.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.135 "is_configured": false, 00:13:55.135 "data_offset": 0, 00:13:55.135 "data_size": 0 00:13:55.135 }, 00:13:55.135 { 00:13:55.135 "name": "BaseBdev2", 00:13:55.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.135 "is_configured": false, 00:13:55.135 "data_offset": 0, 00:13:55.135 "data_size": 0 00:13:55.135 }, 00:13:55.135 { 00:13:55.135 "name": "BaseBdev3", 00:13:55.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.135 "is_configured": false, 00:13:55.135 "data_offset": 0, 00:13:55.135 "data_size": 0 00:13:55.135 } 00:13:55.135 ] 00:13:55.135 }' 00:13:55.135 04:11:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:55.135 04:11:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.706 04:11:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:55.706 04:11:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.706 04:11:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.706 [2024-11-21 04:11:55.376078] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:55.706 [2024-11-21 04:11:55.376133] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000001200 name Existed_Raid, state configuring 00:13:55.706 04:11:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.706 04:11:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:55.706 04:11:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.706 04:11:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.706 [2024-11-21 04:11:55.388072] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:55.706 [2024-11-21 04:11:55.388112] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:55.706 [2024-11-21 04:11:55.388125] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:55.706 [2024-11-21 04:11:55.388135] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:55.706 [2024-11-21 04:11:55.388156] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:55.706 [2024-11-21 04:11:55.388165] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:55.706 04:11:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.706 04:11:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:55.706 04:11:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.706 04:11:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.706 [2024-11-21 04:11:55.414782] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:55.706 BaseBdev1 00:13:55.706 04:11:55 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.706 04:11:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:55.706 04:11:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:55.706 04:11:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:55.706 04:11:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:55.706 04:11:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:55.706 04:11:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:55.706 04:11:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:55.706 04:11:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.706 04:11:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.706 04:11:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.706 04:11:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:55.706 04:11:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.706 04:11:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.706 [ 00:13:55.706 { 00:13:55.706 "name": "BaseBdev1", 00:13:55.706 "aliases": [ 00:13:55.706 "30d02871-a58e-4a9f-a94f-394ce627664e" 00:13:55.706 ], 00:13:55.706 "product_name": "Malloc disk", 00:13:55.706 "block_size": 512, 00:13:55.706 "num_blocks": 65536, 00:13:55.706 "uuid": "30d02871-a58e-4a9f-a94f-394ce627664e", 00:13:55.706 "assigned_rate_limits": { 00:13:55.706 "rw_ios_per_sec": 0, 00:13:55.706 
"rw_mbytes_per_sec": 0, 00:13:55.706 "r_mbytes_per_sec": 0, 00:13:55.706 "w_mbytes_per_sec": 0 00:13:55.706 }, 00:13:55.706 "claimed": true, 00:13:55.706 "claim_type": "exclusive_write", 00:13:55.706 "zoned": false, 00:13:55.706 "supported_io_types": { 00:13:55.706 "read": true, 00:13:55.706 "write": true, 00:13:55.706 "unmap": true, 00:13:55.706 "flush": true, 00:13:55.706 "reset": true, 00:13:55.706 "nvme_admin": false, 00:13:55.706 "nvme_io": false, 00:13:55.706 "nvme_io_md": false, 00:13:55.706 "write_zeroes": true, 00:13:55.706 "zcopy": true, 00:13:55.706 "get_zone_info": false, 00:13:55.706 "zone_management": false, 00:13:55.706 "zone_append": false, 00:13:55.706 "compare": false, 00:13:55.706 "compare_and_write": false, 00:13:55.706 "abort": true, 00:13:55.706 "seek_hole": false, 00:13:55.707 "seek_data": false, 00:13:55.707 "copy": true, 00:13:55.707 "nvme_iov_md": false 00:13:55.707 }, 00:13:55.707 "memory_domains": [ 00:13:55.707 { 00:13:55.707 "dma_device_id": "system", 00:13:55.707 "dma_device_type": 1 00:13:55.707 }, 00:13:55.707 { 00:13:55.707 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:55.707 "dma_device_type": 2 00:13:55.707 } 00:13:55.707 ], 00:13:55.707 "driver_specific": {} 00:13:55.707 } 00:13:55.707 ] 00:13:55.707 04:11:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.707 04:11:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:55.707 04:11:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:55.707 04:11:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:55.707 04:11:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:55.707 04:11:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:55.707 04:11:55 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:55.707 04:11:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:55.707 04:11:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:55.707 04:11:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:55.707 04:11:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:55.707 04:11:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:55.707 04:11:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.707 04:11:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.707 04:11:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.707 04:11:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:55.707 04:11:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.707 04:11:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:55.707 "name": "Existed_Raid", 00:13:55.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.707 "strip_size_kb": 64, 00:13:55.707 "state": "configuring", 00:13:55.707 "raid_level": "raid5f", 00:13:55.707 "superblock": false, 00:13:55.707 "num_base_bdevs": 3, 00:13:55.707 "num_base_bdevs_discovered": 1, 00:13:55.707 "num_base_bdevs_operational": 3, 00:13:55.707 "base_bdevs_list": [ 00:13:55.707 { 00:13:55.707 "name": "BaseBdev1", 00:13:55.707 "uuid": "30d02871-a58e-4a9f-a94f-394ce627664e", 00:13:55.707 "is_configured": true, 00:13:55.707 "data_offset": 0, 00:13:55.707 "data_size": 65536 00:13:55.707 }, 00:13:55.707 { 00:13:55.707 "name": 
"BaseBdev2", 00:13:55.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.707 "is_configured": false, 00:13:55.707 "data_offset": 0, 00:13:55.707 "data_size": 0 00:13:55.707 }, 00:13:55.707 { 00:13:55.707 "name": "BaseBdev3", 00:13:55.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.707 "is_configured": false, 00:13:55.707 "data_offset": 0, 00:13:55.707 "data_size": 0 00:13:55.707 } 00:13:55.707 ] 00:13:55.707 }' 00:13:55.707 04:11:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:55.707 04:11:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.968 04:11:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:55.968 04:11:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.968 04:11:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.968 [2024-11-21 04:11:55.866025] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:55.968 [2024-11-21 04:11:55.866067] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:13:55.968 04:11:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.968 04:11:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:55.968 04:11:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.968 04:11:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.968 [2024-11-21 04:11:55.878045] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:55.968 [2024-11-21 04:11:55.880203] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:13:55.968 [2024-11-21 04:11:55.880251] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:55.968 [2024-11-21 04:11:55.880261] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:55.968 [2024-11-21 04:11:55.880272] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:55.968 04:11:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.968 04:11:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:55.968 04:11:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:55.968 04:11:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:55.968 04:11:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:55.968 04:11:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:55.968 04:11:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:55.968 04:11:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:55.968 04:11:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:55.968 04:11:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:55.968 04:11:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:55.968 04:11:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:55.968 04:11:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:55.968 04:11:55 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.968 04:11:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:55.968 04:11:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.968 04:11:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.968 04:11:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.968 04:11:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:55.968 "name": "Existed_Raid", 00:13:55.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.968 "strip_size_kb": 64, 00:13:55.968 "state": "configuring", 00:13:55.968 "raid_level": "raid5f", 00:13:55.968 "superblock": false, 00:13:55.968 "num_base_bdevs": 3, 00:13:55.968 "num_base_bdevs_discovered": 1, 00:13:55.968 "num_base_bdevs_operational": 3, 00:13:55.968 "base_bdevs_list": [ 00:13:55.968 { 00:13:55.968 "name": "BaseBdev1", 00:13:55.968 "uuid": "30d02871-a58e-4a9f-a94f-394ce627664e", 00:13:55.968 "is_configured": true, 00:13:55.968 "data_offset": 0, 00:13:55.968 "data_size": 65536 00:13:55.968 }, 00:13:55.968 { 00:13:55.968 "name": "BaseBdev2", 00:13:55.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.968 "is_configured": false, 00:13:55.968 "data_offset": 0, 00:13:55.968 "data_size": 0 00:13:55.968 }, 00:13:55.968 { 00:13:55.968 "name": "BaseBdev3", 00:13:55.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.968 "is_configured": false, 00:13:55.968 "data_offset": 0, 00:13:55.968 "data_size": 0 00:13:55.968 } 00:13:55.968 ] 00:13:55.968 }' 00:13:55.968 04:11:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:55.968 04:11:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.563 04:11:56 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:56.563 04:11:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.563 04:11:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.563 [2024-11-21 04:11:56.293851] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:56.563 BaseBdev2 00:13:56.563 04:11:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.563 04:11:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:56.563 04:11:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:56.563 04:11:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:56.563 04:11:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:56.563 04:11:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:56.563 04:11:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:56.563 04:11:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:56.563 04:11:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.563 04:11:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.563 04:11:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.563 04:11:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:56.563 04:11:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.563 04:11:56 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:56.563 [ 00:13:56.563 { 00:13:56.563 "name": "BaseBdev2", 00:13:56.563 "aliases": [ 00:13:56.563 "590fbb08-bda0-4093-93dc-aa6debad4824" 00:13:56.563 ], 00:13:56.563 "product_name": "Malloc disk", 00:13:56.563 "block_size": 512, 00:13:56.563 "num_blocks": 65536, 00:13:56.563 "uuid": "590fbb08-bda0-4093-93dc-aa6debad4824", 00:13:56.563 "assigned_rate_limits": { 00:13:56.563 "rw_ios_per_sec": 0, 00:13:56.563 "rw_mbytes_per_sec": 0, 00:13:56.563 "r_mbytes_per_sec": 0, 00:13:56.563 "w_mbytes_per_sec": 0 00:13:56.563 }, 00:13:56.563 "claimed": true, 00:13:56.563 "claim_type": "exclusive_write", 00:13:56.563 "zoned": false, 00:13:56.563 "supported_io_types": { 00:13:56.563 "read": true, 00:13:56.563 "write": true, 00:13:56.563 "unmap": true, 00:13:56.563 "flush": true, 00:13:56.563 "reset": true, 00:13:56.563 "nvme_admin": false, 00:13:56.563 "nvme_io": false, 00:13:56.563 "nvme_io_md": false, 00:13:56.563 "write_zeroes": true, 00:13:56.563 "zcopy": true, 00:13:56.563 "get_zone_info": false, 00:13:56.563 "zone_management": false, 00:13:56.563 "zone_append": false, 00:13:56.563 "compare": false, 00:13:56.563 "compare_and_write": false, 00:13:56.563 "abort": true, 00:13:56.563 "seek_hole": false, 00:13:56.563 "seek_data": false, 00:13:56.563 "copy": true, 00:13:56.563 "nvme_iov_md": false 00:13:56.563 }, 00:13:56.563 "memory_domains": [ 00:13:56.563 { 00:13:56.563 "dma_device_id": "system", 00:13:56.563 "dma_device_type": 1 00:13:56.563 }, 00:13:56.563 { 00:13:56.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:56.563 "dma_device_type": 2 00:13:56.563 } 00:13:56.563 ], 00:13:56.563 "driver_specific": {} 00:13:56.563 } 00:13:56.563 ] 00:13:56.563 04:11:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.563 04:11:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:56.563 04:11:56 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:56.563 04:11:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:56.563 04:11:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:56.563 04:11:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:56.563 04:11:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:56.563 04:11:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:56.563 04:11:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:56.563 04:11:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:56.564 04:11:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.564 04:11:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.564 04:11:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.564 04:11:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.564 04:11:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.564 04:11:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.564 04:11:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:56.564 04:11:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.564 04:11:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.564 04:11:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:13:56.564 "name": "Existed_Raid", 00:13:56.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.564 "strip_size_kb": 64, 00:13:56.564 "state": "configuring", 00:13:56.564 "raid_level": "raid5f", 00:13:56.564 "superblock": false, 00:13:56.564 "num_base_bdevs": 3, 00:13:56.564 "num_base_bdevs_discovered": 2, 00:13:56.564 "num_base_bdevs_operational": 3, 00:13:56.564 "base_bdevs_list": [ 00:13:56.564 { 00:13:56.564 "name": "BaseBdev1", 00:13:56.564 "uuid": "30d02871-a58e-4a9f-a94f-394ce627664e", 00:13:56.564 "is_configured": true, 00:13:56.564 "data_offset": 0, 00:13:56.564 "data_size": 65536 00:13:56.564 }, 00:13:56.564 { 00:13:56.564 "name": "BaseBdev2", 00:13:56.564 "uuid": "590fbb08-bda0-4093-93dc-aa6debad4824", 00:13:56.564 "is_configured": true, 00:13:56.564 "data_offset": 0, 00:13:56.564 "data_size": 65536 00:13:56.564 }, 00:13:56.564 { 00:13:56.564 "name": "BaseBdev3", 00:13:56.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.564 "is_configured": false, 00:13:56.564 "data_offset": 0, 00:13:56.564 "data_size": 0 00:13:56.564 } 00:13:56.564 ] 00:13:56.564 }' 00:13:56.564 04:11:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.564 04:11:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.839 04:11:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:56.839 04:11:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.839 04:11:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.839 [2024-11-21 04:11:56.759870] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:56.839 [2024-11-21 04:11:56.760020] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:13:56.839 [2024-11-21 04:11:56.760056] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:56.839 [2024-11-21 04:11:56.761119] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:13:56.839 [2024-11-21 04:11:56.762680] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:13:56.839 [2024-11-21 04:11:56.762763] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:13:56.839 [2024-11-21 04:11:56.763470] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:56.839 BaseBdev3 00:13:56.839 04:11:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.839 04:11:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:56.839 04:11:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:56.839 04:11:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:56.839 04:11:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:56.839 04:11:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:56.839 04:11:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:56.839 04:11:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:56.839 04:11:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.839 04:11:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.839 04:11:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.839 04:11:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:13:56.839 04:11:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.839 04:11:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.839 [ 00:13:56.839 { 00:13:56.839 "name": "BaseBdev3", 00:13:56.839 "aliases": [ 00:13:56.839 "d9ef8e30-c066-4254-a55c-177c611ed8ba" 00:13:56.839 ], 00:13:56.839 "product_name": "Malloc disk", 00:13:56.839 "block_size": 512, 00:13:56.839 "num_blocks": 65536, 00:13:56.839 "uuid": "d9ef8e30-c066-4254-a55c-177c611ed8ba", 00:13:56.839 "assigned_rate_limits": { 00:13:56.839 "rw_ios_per_sec": 0, 00:13:56.839 "rw_mbytes_per_sec": 0, 00:13:56.839 "r_mbytes_per_sec": 0, 00:13:56.839 "w_mbytes_per_sec": 0 00:13:56.839 }, 00:13:56.839 "claimed": true, 00:13:56.839 "claim_type": "exclusive_write", 00:13:56.839 "zoned": false, 00:13:56.839 "supported_io_types": { 00:13:56.839 "read": true, 00:13:56.839 "write": true, 00:13:56.839 "unmap": true, 00:13:56.839 "flush": true, 00:13:56.839 "reset": true, 00:13:56.839 "nvme_admin": false, 00:13:56.839 "nvme_io": false, 00:13:56.839 "nvme_io_md": false, 00:13:56.839 "write_zeroes": true, 00:13:56.839 "zcopy": true, 00:13:56.839 "get_zone_info": false, 00:13:56.839 "zone_management": false, 00:13:56.839 "zone_append": false, 00:13:56.839 "compare": false, 00:13:56.839 "compare_and_write": false, 00:13:56.839 "abort": true, 00:13:56.839 "seek_hole": false, 00:13:56.839 "seek_data": false, 00:13:56.839 "copy": true, 00:13:56.839 "nvme_iov_md": false 00:13:56.839 }, 00:13:56.839 "memory_domains": [ 00:13:56.839 { 00:13:56.839 "dma_device_id": "system", 00:13:56.839 "dma_device_type": 1 00:13:56.839 }, 00:13:56.839 { 00:13:56.839 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:56.839 "dma_device_type": 2 00:13:56.839 } 00:13:56.839 ], 00:13:56.839 "driver_specific": {} 00:13:56.839 } 00:13:56.839 ] 00:13:56.839 04:11:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:13:56.839 04:11:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:56.839 04:11:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:56.839 04:11:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:56.839 04:11:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:13:56.839 04:11:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:56.839 04:11:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:56.839 04:11:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:56.839 04:11:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:56.839 04:11:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:56.839 04:11:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.839 04:11:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.839 04:11:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.839 04:11:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.839 04:11:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.839 04:11:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:56.839 04:11:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.839 04:11:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.100 04:11:56 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.100 04:11:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:57.100 "name": "Existed_Raid", 00:13:57.101 "uuid": "5295ad06-4b42-4656-8710-7f02ae0d9e42", 00:13:57.101 "strip_size_kb": 64, 00:13:57.101 "state": "online", 00:13:57.101 "raid_level": "raid5f", 00:13:57.101 "superblock": false, 00:13:57.101 "num_base_bdevs": 3, 00:13:57.101 "num_base_bdevs_discovered": 3, 00:13:57.101 "num_base_bdevs_operational": 3, 00:13:57.101 "base_bdevs_list": [ 00:13:57.101 { 00:13:57.101 "name": "BaseBdev1", 00:13:57.101 "uuid": "30d02871-a58e-4a9f-a94f-394ce627664e", 00:13:57.101 "is_configured": true, 00:13:57.101 "data_offset": 0, 00:13:57.101 "data_size": 65536 00:13:57.101 }, 00:13:57.101 { 00:13:57.101 "name": "BaseBdev2", 00:13:57.101 "uuid": "590fbb08-bda0-4093-93dc-aa6debad4824", 00:13:57.101 "is_configured": true, 00:13:57.101 "data_offset": 0, 00:13:57.101 "data_size": 65536 00:13:57.101 }, 00:13:57.101 { 00:13:57.101 "name": "BaseBdev3", 00:13:57.101 "uuid": "d9ef8e30-c066-4254-a55c-177c611ed8ba", 00:13:57.101 "is_configured": true, 00:13:57.101 "data_offset": 0, 00:13:57.101 "data_size": 65536 00:13:57.101 } 00:13:57.101 ] 00:13:57.101 }' 00:13:57.101 04:11:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:57.101 04:11:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.361 04:11:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:57.361 04:11:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:57.361 04:11:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:57.361 04:11:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:57.361 04:11:57 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:57.361 04:11:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:57.361 04:11:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:57.361 04:11:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.361 04:11:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.361 04:11:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:57.361 [2024-11-21 04:11:57.202498] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:57.361 04:11:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.361 04:11:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:57.361 "name": "Existed_Raid", 00:13:57.361 "aliases": [ 00:13:57.361 "5295ad06-4b42-4656-8710-7f02ae0d9e42" 00:13:57.361 ], 00:13:57.361 "product_name": "Raid Volume", 00:13:57.361 "block_size": 512, 00:13:57.361 "num_blocks": 131072, 00:13:57.361 "uuid": "5295ad06-4b42-4656-8710-7f02ae0d9e42", 00:13:57.361 "assigned_rate_limits": { 00:13:57.361 "rw_ios_per_sec": 0, 00:13:57.361 "rw_mbytes_per_sec": 0, 00:13:57.361 "r_mbytes_per_sec": 0, 00:13:57.361 "w_mbytes_per_sec": 0 00:13:57.361 }, 00:13:57.361 "claimed": false, 00:13:57.361 "zoned": false, 00:13:57.361 "supported_io_types": { 00:13:57.361 "read": true, 00:13:57.361 "write": true, 00:13:57.361 "unmap": false, 00:13:57.361 "flush": false, 00:13:57.361 "reset": true, 00:13:57.361 "nvme_admin": false, 00:13:57.361 "nvme_io": false, 00:13:57.361 "nvme_io_md": false, 00:13:57.361 "write_zeroes": true, 00:13:57.361 "zcopy": false, 00:13:57.361 "get_zone_info": false, 00:13:57.361 "zone_management": false, 00:13:57.361 "zone_append": false, 
00:13:57.361 "compare": false, 00:13:57.361 "compare_and_write": false, 00:13:57.361 "abort": false, 00:13:57.361 "seek_hole": false, 00:13:57.361 "seek_data": false, 00:13:57.361 "copy": false, 00:13:57.361 "nvme_iov_md": false 00:13:57.361 }, 00:13:57.361 "driver_specific": { 00:13:57.361 "raid": { 00:13:57.361 "uuid": "5295ad06-4b42-4656-8710-7f02ae0d9e42", 00:13:57.361 "strip_size_kb": 64, 00:13:57.361 "state": "online", 00:13:57.361 "raid_level": "raid5f", 00:13:57.361 "superblock": false, 00:13:57.361 "num_base_bdevs": 3, 00:13:57.361 "num_base_bdevs_discovered": 3, 00:13:57.361 "num_base_bdevs_operational": 3, 00:13:57.361 "base_bdevs_list": [ 00:13:57.361 { 00:13:57.361 "name": "BaseBdev1", 00:13:57.361 "uuid": "30d02871-a58e-4a9f-a94f-394ce627664e", 00:13:57.361 "is_configured": true, 00:13:57.361 "data_offset": 0, 00:13:57.361 "data_size": 65536 00:13:57.361 }, 00:13:57.361 { 00:13:57.361 "name": "BaseBdev2", 00:13:57.361 "uuid": "590fbb08-bda0-4093-93dc-aa6debad4824", 00:13:57.361 "is_configured": true, 00:13:57.361 "data_offset": 0, 00:13:57.361 "data_size": 65536 00:13:57.361 }, 00:13:57.361 { 00:13:57.361 "name": "BaseBdev3", 00:13:57.361 "uuid": "d9ef8e30-c066-4254-a55c-177c611ed8ba", 00:13:57.361 "is_configured": true, 00:13:57.361 "data_offset": 0, 00:13:57.361 "data_size": 65536 00:13:57.361 } 00:13:57.361 ] 00:13:57.361 } 00:13:57.361 } 00:13:57.361 }' 00:13:57.362 04:11:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:57.362 04:11:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:57.362 BaseBdev2 00:13:57.362 BaseBdev3' 00:13:57.362 04:11:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:57.621 04:11:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:13:57.621 04:11:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:57.621 04:11:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:57.621 04:11:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.621 04:11:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.621 04:11:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:57.621 04:11:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.621 04:11:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:57.621 04:11:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:57.621 04:11:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:57.621 04:11:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:57.621 04:11:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.621 04:11:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.621 04:11:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:57.622 04:11:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.622 04:11:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:57.622 04:11:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:57.622 04:11:57 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:57.622 04:11:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:57.622 04:11:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:57.622 04:11:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.622 04:11:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.622 04:11:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.622 04:11:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:57.622 04:11:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:57.622 04:11:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:57.622 04:11:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.622 04:11:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.622 [2024-11-21 04:11:57.501923] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:57.622 04:11:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.622 04:11:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:57.622 04:11:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:13:57.622 04:11:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:57.622 04:11:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:57.622 04:11:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:57.622 
04:11:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:13:57.622 04:11:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:57.622 04:11:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:57.622 04:11:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:57.622 04:11:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:57.622 04:11:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:57.622 04:11:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:57.622 04:11:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:57.622 04:11:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:57.622 04:11:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:57.622 04:11:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.622 04:11:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.622 04:11:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.622 04:11:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:57.622 04:11:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.622 04:11:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:57.622 "name": "Existed_Raid", 00:13:57.622 "uuid": "5295ad06-4b42-4656-8710-7f02ae0d9e42", 00:13:57.622 "strip_size_kb": 64, 00:13:57.622 "state": 
"online", 00:13:57.622 "raid_level": "raid5f", 00:13:57.622 "superblock": false, 00:13:57.622 "num_base_bdevs": 3, 00:13:57.622 "num_base_bdevs_discovered": 2, 00:13:57.622 "num_base_bdevs_operational": 2, 00:13:57.622 "base_bdevs_list": [ 00:13:57.622 { 00:13:57.622 "name": null, 00:13:57.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.622 "is_configured": false, 00:13:57.622 "data_offset": 0, 00:13:57.622 "data_size": 65536 00:13:57.622 }, 00:13:57.622 { 00:13:57.622 "name": "BaseBdev2", 00:13:57.622 "uuid": "590fbb08-bda0-4093-93dc-aa6debad4824", 00:13:57.622 "is_configured": true, 00:13:57.622 "data_offset": 0, 00:13:57.622 "data_size": 65536 00:13:57.622 }, 00:13:57.622 { 00:13:57.622 "name": "BaseBdev3", 00:13:57.622 "uuid": "d9ef8e30-c066-4254-a55c-177c611ed8ba", 00:13:57.622 "is_configured": true, 00:13:57.622 "data_offset": 0, 00:13:57.622 "data_size": 65536 00:13:57.622 } 00:13:57.622 ] 00:13:57.622 }' 00:13:57.622 04:11:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:57.622 04:11:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.191 04:11:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:58.191 04:11:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:58.191 04:11:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.191 04:11:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:58.191 04:11:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.191 04:11:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.191 04:11:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.191 04:11:57 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:58.191 04:11:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:58.191 04:11:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:58.191 04:11:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.191 04:11:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.191 [2024-11-21 04:11:57.950025] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:58.191 [2024-11-21 04:11:57.950147] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:58.191 [2024-11-21 04:11:57.970459] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:58.191 04:11:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.191 04:11:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:58.191 04:11:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:58.191 04:11:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.191 04:11:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.191 04:11:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.191 04:11:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:58.191 04:11:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.191 04:11:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:58.191 04:11:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:13:58.191 04:11:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:58.191 04:11:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.191 04:11:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.191 [2024-11-21 04:11:58.030371] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:58.191 [2024-11-21 04:11:58.030434] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:13:58.191 04:11:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.191 04:11:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:58.191 04:11:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:58.191 04:11:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.191 04:11:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.191 04:11:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.191 04:11:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:58.191 04:11:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.191 04:11:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:58.191 04:11:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:58.191 04:11:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:13:58.191 04:11:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:58.191 04:11:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:13:58.191 04:11:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:58.191 04:11:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.191 04:11:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.191 BaseBdev2 00:13:58.191 04:11:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.191 04:11:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:58.191 04:11:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:58.191 04:11:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:58.191 04:11:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:58.191 04:11:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:58.191 04:11:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:58.191 04:11:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:58.191 04:11:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.191 04:11:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.191 04:11:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.191 04:11:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:58.191 04:11:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.191 04:11:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:13:58.191 [ 00:13:58.191 { 00:13:58.191 "name": "BaseBdev2", 00:13:58.191 "aliases": [ 00:13:58.191 "e468f957-7f82-4046-b9af-558e84052523" 00:13:58.191 ], 00:13:58.191 "product_name": "Malloc disk", 00:13:58.191 "block_size": 512, 00:13:58.191 "num_blocks": 65536, 00:13:58.191 "uuid": "e468f957-7f82-4046-b9af-558e84052523", 00:13:58.191 "assigned_rate_limits": { 00:13:58.191 "rw_ios_per_sec": 0, 00:13:58.191 "rw_mbytes_per_sec": 0, 00:13:58.191 "r_mbytes_per_sec": 0, 00:13:58.192 "w_mbytes_per_sec": 0 00:13:58.192 }, 00:13:58.192 "claimed": false, 00:13:58.192 "zoned": false, 00:13:58.192 "supported_io_types": { 00:13:58.192 "read": true, 00:13:58.192 "write": true, 00:13:58.192 "unmap": true, 00:13:58.192 "flush": true, 00:13:58.192 "reset": true, 00:13:58.192 "nvme_admin": false, 00:13:58.192 "nvme_io": false, 00:13:58.192 "nvme_io_md": false, 00:13:58.192 "write_zeroes": true, 00:13:58.192 "zcopy": true, 00:13:58.192 "get_zone_info": false, 00:13:58.192 "zone_management": false, 00:13:58.192 "zone_append": false, 00:13:58.192 "compare": false, 00:13:58.192 "compare_and_write": false, 00:13:58.192 "abort": true, 00:13:58.192 "seek_hole": false, 00:13:58.192 "seek_data": false, 00:13:58.192 "copy": true, 00:13:58.192 "nvme_iov_md": false 00:13:58.192 }, 00:13:58.192 "memory_domains": [ 00:13:58.192 { 00:13:58.192 "dma_device_id": "system", 00:13:58.192 "dma_device_type": 1 00:13:58.192 }, 00:13:58.192 { 00:13:58.192 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:58.192 "dma_device_type": 2 00:13:58.192 } 00:13:58.192 ], 00:13:58.192 "driver_specific": {} 00:13:58.192 } 00:13:58.192 ] 00:13:58.192 04:11:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.192 04:11:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:58.192 04:11:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:58.192 04:11:58 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:58.192 04:11:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:58.192 04:11:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.192 04:11:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.452 BaseBdev3 00:13:58.452 04:11:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.452 04:11:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:58.452 04:11:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:58.452 04:11:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:58.452 04:11:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:58.452 04:11:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:58.452 04:11:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:58.452 04:11:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:58.452 04:11:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.452 04:11:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.452 04:11:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.452 04:11:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:58.452 04:11:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.452 04:11:58 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:58.452 [ 00:13:58.452 { 00:13:58.452 "name": "BaseBdev3", 00:13:58.452 "aliases": [ 00:13:58.452 "feb1a370-6dcc-4c86-b5fb-ed9fd386836a" 00:13:58.452 ], 00:13:58.452 "product_name": "Malloc disk", 00:13:58.452 "block_size": 512, 00:13:58.452 "num_blocks": 65536, 00:13:58.452 "uuid": "feb1a370-6dcc-4c86-b5fb-ed9fd386836a", 00:13:58.452 "assigned_rate_limits": { 00:13:58.452 "rw_ios_per_sec": 0, 00:13:58.452 "rw_mbytes_per_sec": 0, 00:13:58.452 "r_mbytes_per_sec": 0, 00:13:58.452 "w_mbytes_per_sec": 0 00:13:58.452 }, 00:13:58.452 "claimed": false, 00:13:58.452 "zoned": false, 00:13:58.452 "supported_io_types": { 00:13:58.452 "read": true, 00:13:58.452 "write": true, 00:13:58.452 "unmap": true, 00:13:58.452 "flush": true, 00:13:58.452 "reset": true, 00:13:58.452 "nvme_admin": false, 00:13:58.452 "nvme_io": false, 00:13:58.452 "nvme_io_md": false, 00:13:58.452 "write_zeroes": true, 00:13:58.452 "zcopy": true, 00:13:58.452 "get_zone_info": false, 00:13:58.452 "zone_management": false, 00:13:58.452 "zone_append": false, 00:13:58.452 "compare": false, 00:13:58.452 "compare_and_write": false, 00:13:58.452 "abort": true, 00:13:58.452 "seek_hole": false, 00:13:58.452 "seek_data": false, 00:13:58.452 "copy": true, 00:13:58.452 "nvme_iov_md": false 00:13:58.452 }, 00:13:58.452 "memory_domains": [ 00:13:58.452 { 00:13:58.452 "dma_device_id": "system", 00:13:58.452 "dma_device_type": 1 00:13:58.452 }, 00:13:58.452 { 00:13:58.452 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:58.452 "dma_device_type": 2 00:13:58.452 } 00:13:58.452 ], 00:13:58.452 "driver_specific": {} 00:13:58.452 } 00:13:58.452 ] 00:13:58.452 04:11:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.452 04:11:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:58.452 04:11:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:58.452 04:11:58 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:58.452 04:11:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:58.452 04:11:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.452 04:11:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.452 [2024-11-21 04:11:58.218514] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:58.452 [2024-11-21 04:11:58.218572] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:58.452 [2024-11-21 04:11:58.218609] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:58.452 [2024-11-21 04:11:58.220731] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:58.452 04:11:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.452 04:11:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:58.452 04:11:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:58.452 04:11:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:58.452 04:11:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:58.452 04:11:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:58.452 04:11:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:58.452 04:11:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:58.452 04:11:58 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:58.452 04:11:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:58.452 04:11:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:58.452 04:11:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.452 04:11:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:58.452 04:11:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.452 04:11:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.452 04:11:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.452 04:11:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:58.452 "name": "Existed_Raid", 00:13:58.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.452 "strip_size_kb": 64, 00:13:58.452 "state": "configuring", 00:13:58.452 "raid_level": "raid5f", 00:13:58.452 "superblock": false, 00:13:58.452 "num_base_bdevs": 3, 00:13:58.452 "num_base_bdevs_discovered": 2, 00:13:58.452 "num_base_bdevs_operational": 3, 00:13:58.452 "base_bdevs_list": [ 00:13:58.452 { 00:13:58.452 "name": "BaseBdev1", 00:13:58.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.452 "is_configured": false, 00:13:58.452 "data_offset": 0, 00:13:58.452 "data_size": 0 00:13:58.452 }, 00:13:58.452 { 00:13:58.452 "name": "BaseBdev2", 00:13:58.452 "uuid": "e468f957-7f82-4046-b9af-558e84052523", 00:13:58.452 "is_configured": true, 00:13:58.452 "data_offset": 0, 00:13:58.452 "data_size": 65536 00:13:58.452 }, 00:13:58.452 { 00:13:58.452 "name": "BaseBdev3", 00:13:58.452 "uuid": "feb1a370-6dcc-4c86-b5fb-ed9fd386836a", 00:13:58.452 "is_configured": true, 
00:13:58.452 "data_offset": 0, 00:13:58.452 "data_size": 65536 00:13:58.452 } 00:13:58.452 ] 00:13:58.452 }' 00:13:58.452 04:11:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:58.452 04:11:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.711 04:11:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:58.711 04:11:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.711 04:11:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.711 [2024-11-21 04:11:58.621802] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:58.711 04:11:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.711 04:11:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:58.711 04:11:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:58.711 04:11:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:58.711 04:11:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:58.711 04:11:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:58.711 04:11:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:58.711 04:11:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:58.711 04:11:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:58.711 04:11:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:58.711 04:11:58 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:58.711 04:11:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.711 04:11:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:58.711 04:11:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.711 04:11:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.711 04:11:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.711 04:11:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:58.711 "name": "Existed_Raid", 00:13:58.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.712 "strip_size_kb": 64, 00:13:58.712 "state": "configuring", 00:13:58.712 "raid_level": "raid5f", 00:13:58.712 "superblock": false, 00:13:58.712 "num_base_bdevs": 3, 00:13:58.712 "num_base_bdevs_discovered": 1, 00:13:58.712 "num_base_bdevs_operational": 3, 00:13:58.712 "base_bdevs_list": [ 00:13:58.712 { 00:13:58.712 "name": "BaseBdev1", 00:13:58.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.712 "is_configured": false, 00:13:58.712 "data_offset": 0, 00:13:58.712 "data_size": 0 00:13:58.712 }, 00:13:58.712 { 00:13:58.712 "name": null, 00:13:58.712 "uuid": "e468f957-7f82-4046-b9af-558e84052523", 00:13:58.712 "is_configured": false, 00:13:58.712 "data_offset": 0, 00:13:58.712 "data_size": 65536 00:13:58.712 }, 00:13:58.712 { 00:13:58.712 "name": "BaseBdev3", 00:13:58.712 "uuid": "feb1a370-6dcc-4c86-b5fb-ed9fd386836a", 00:13:58.712 "is_configured": true, 00:13:58.712 "data_offset": 0, 00:13:58.712 "data_size": 65536 00:13:58.712 } 00:13:58.712 ] 00:13:58.712 }' 00:13:58.712 04:11:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:58.712 04:11:58 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.281 04:11:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:59.281 04:11:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.281 04:11:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.282 04:11:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.282 04:11:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.282 04:11:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:59.282 04:11:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:59.282 04:11:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.282 04:11:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.282 [2024-11-21 04:11:59.109718] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:59.282 BaseBdev1 00:13:59.282 04:11:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.282 04:11:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:59.282 04:11:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:59.282 04:11:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:59.282 04:11:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:59.282 04:11:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:59.282 04:11:59 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:59.282 04:11:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:59.282 04:11:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.282 04:11:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.282 04:11:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.282 04:11:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:59.282 04:11:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.282 04:11:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.282 [ 00:13:59.282 { 00:13:59.282 "name": "BaseBdev1", 00:13:59.282 "aliases": [ 00:13:59.282 "4ae8951d-2192-4dc5-8df6-4f323cb04835" 00:13:59.282 ], 00:13:59.282 "product_name": "Malloc disk", 00:13:59.282 "block_size": 512, 00:13:59.282 "num_blocks": 65536, 00:13:59.282 "uuid": "4ae8951d-2192-4dc5-8df6-4f323cb04835", 00:13:59.282 "assigned_rate_limits": { 00:13:59.282 "rw_ios_per_sec": 0, 00:13:59.282 "rw_mbytes_per_sec": 0, 00:13:59.282 "r_mbytes_per_sec": 0, 00:13:59.282 "w_mbytes_per_sec": 0 00:13:59.282 }, 00:13:59.282 "claimed": true, 00:13:59.282 "claim_type": "exclusive_write", 00:13:59.282 "zoned": false, 00:13:59.282 "supported_io_types": { 00:13:59.282 "read": true, 00:13:59.282 "write": true, 00:13:59.282 "unmap": true, 00:13:59.282 "flush": true, 00:13:59.282 "reset": true, 00:13:59.282 "nvme_admin": false, 00:13:59.282 "nvme_io": false, 00:13:59.282 "nvme_io_md": false, 00:13:59.282 "write_zeroes": true, 00:13:59.282 "zcopy": true, 00:13:59.282 "get_zone_info": false, 00:13:59.282 "zone_management": false, 00:13:59.282 "zone_append": false, 00:13:59.282 
"compare": false, 00:13:59.282 "compare_and_write": false, 00:13:59.282 "abort": true, 00:13:59.282 "seek_hole": false, 00:13:59.282 "seek_data": false, 00:13:59.282 "copy": true, 00:13:59.282 "nvme_iov_md": false 00:13:59.282 }, 00:13:59.282 "memory_domains": [ 00:13:59.282 { 00:13:59.282 "dma_device_id": "system", 00:13:59.282 "dma_device_type": 1 00:13:59.282 }, 00:13:59.282 { 00:13:59.282 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:59.282 "dma_device_type": 2 00:13:59.282 } 00:13:59.282 ], 00:13:59.282 "driver_specific": {} 00:13:59.282 } 00:13:59.282 ] 00:13:59.282 04:11:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.282 04:11:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:59.282 04:11:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:59.282 04:11:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:59.282 04:11:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:59.282 04:11:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:59.282 04:11:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:59.282 04:11:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:59.282 04:11:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:59.282 04:11:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:59.282 04:11:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:59.282 04:11:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:59.282 04:11:59 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.282 04:11:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.282 04:11:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:59.282 04:11:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.282 04:11:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.282 04:11:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:59.282 "name": "Existed_Raid", 00:13:59.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.282 "strip_size_kb": 64, 00:13:59.282 "state": "configuring", 00:13:59.282 "raid_level": "raid5f", 00:13:59.282 "superblock": false, 00:13:59.282 "num_base_bdevs": 3, 00:13:59.282 "num_base_bdevs_discovered": 2, 00:13:59.282 "num_base_bdevs_operational": 3, 00:13:59.282 "base_bdevs_list": [ 00:13:59.282 { 00:13:59.282 "name": "BaseBdev1", 00:13:59.282 "uuid": "4ae8951d-2192-4dc5-8df6-4f323cb04835", 00:13:59.282 "is_configured": true, 00:13:59.282 "data_offset": 0, 00:13:59.282 "data_size": 65536 00:13:59.282 }, 00:13:59.282 { 00:13:59.282 "name": null, 00:13:59.282 "uuid": "e468f957-7f82-4046-b9af-558e84052523", 00:13:59.282 "is_configured": false, 00:13:59.282 "data_offset": 0, 00:13:59.282 "data_size": 65536 00:13:59.282 }, 00:13:59.282 { 00:13:59.282 "name": "BaseBdev3", 00:13:59.282 "uuid": "feb1a370-6dcc-4c86-b5fb-ed9fd386836a", 00:13:59.282 "is_configured": true, 00:13:59.282 "data_offset": 0, 00:13:59.282 "data_size": 65536 00:13:59.282 } 00:13:59.282 ] 00:13:59.282 }' 00:13:59.282 04:11:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:59.282 04:11:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.852 04:11:59 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.852 04:11:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:59.852 04:11:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.852 04:11:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.852 04:11:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.852 04:11:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:59.852 04:11:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:59.852 04:11:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.852 04:11:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.852 [2024-11-21 04:11:59.652835] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:59.852 04:11:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.852 04:11:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:59.852 04:11:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:59.852 04:11:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:59.852 04:11:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:59.852 04:11:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:59.852 04:11:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:59.852 04:11:59 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:59.852 04:11:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:59.852 04:11:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:59.852 04:11:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:59.852 04:11:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.852 04:11:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.852 04:11:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.852 04:11:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:59.852 04:11:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.852 04:11:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:59.852 "name": "Existed_Raid", 00:13:59.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.852 "strip_size_kb": 64, 00:13:59.852 "state": "configuring", 00:13:59.852 "raid_level": "raid5f", 00:13:59.852 "superblock": false, 00:13:59.852 "num_base_bdevs": 3, 00:13:59.852 "num_base_bdevs_discovered": 1, 00:13:59.852 "num_base_bdevs_operational": 3, 00:13:59.852 "base_bdevs_list": [ 00:13:59.852 { 00:13:59.852 "name": "BaseBdev1", 00:13:59.852 "uuid": "4ae8951d-2192-4dc5-8df6-4f323cb04835", 00:13:59.852 "is_configured": true, 00:13:59.852 "data_offset": 0, 00:13:59.852 "data_size": 65536 00:13:59.852 }, 00:13:59.852 { 00:13:59.852 "name": null, 00:13:59.852 "uuid": "e468f957-7f82-4046-b9af-558e84052523", 00:13:59.852 "is_configured": false, 00:13:59.852 "data_offset": 0, 00:13:59.852 "data_size": 65536 00:13:59.852 }, 00:13:59.852 { 00:13:59.852 "name": null, 
00:13:59.852 "uuid": "feb1a370-6dcc-4c86-b5fb-ed9fd386836a", 00:13:59.852 "is_configured": false, 00:13:59.852 "data_offset": 0, 00:13:59.852 "data_size": 65536 00:13:59.852 } 00:13:59.852 ] 00:13:59.852 }' 00:13:59.852 04:11:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:59.852 04:11:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.422 04:12:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.422 04:12:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.422 04:12:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:00.422 04:12:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.422 04:12:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.422 04:12:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:00.422 04:12:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:00.422 04:12:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.422 04:12:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.422 [2024-11-21 04:12:00.151979] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:00.422 04:12:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.422 04:12:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:00.422 04:12:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:00.422 04:12:00 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:00.422 04:12:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:00.422 04:12:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:00.422 04:12:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:00.422 04:12:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.422 04:12:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.422 04:12:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:00.422 04:12:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.422 04:12:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:00.422 04:12:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.422 04:12:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.422 04:12:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.422 04:12:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.422 04:12:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.422 "name": "Existed_Raid", 00:14:00.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.422 "strip_size_kb": 64, 00:14:00.422 "state": "configuring", 00:14:00.422 "raid_level": "raid5f", 00:14:00.422 "superblock": false, 00:14:00.422 "num_base_bdevs": 3, 00:14:00.422 "num_base_bdevs_discovered": 2, 00:14:00.422 "num_base_bdevs_operational": 3, 00:14:00.422 "base_bdevs_list": [ 00:14:00.422 { 
00:14:00.422 "name": "BaseBdev1", 00:14:00.422 "uuid": "4ae8951d-2192-4dc5-8df6-4f323cb04835", 00:14:00.422 "is_configured": true, 00:14:00.422 "data_offset": 0, 00:14:00.422 "data_size": 65536 00:14:00.422 }, 00:14:00.422 { 00:14:00.422 "name": null, 00:14:00.422 "uuid": "e468f957-7f82-4046-b9af-558e84052523", 00:14:00.422 "is_configured": false, 00:14:00.422 "data_offset": 0, 00:14:00.422 "data_size": 65536 00:14:00.422 }, 00:14:00.422 { 00:14:00.422 "name": "BaseBdev3", 00:14:00.422 "uuid": "feb1a370-6dcc-4c86-b5fb-ed9fd386836a", 00:14:00.422 "is_configured": true, 00:14:00.422 "data_offset": 0, 00:14:00.422 "data_size": 65536 00:14:00.422 } 00:14:00.422 ] 00:14:00.422 }' 00:14:00.422 04:12:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.422 04:12:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.682 04:12:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.682 04:12:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:00.682 04:12:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.682 04:12:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.682 04:12:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.682 04:12:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:00.682 04:12:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:00.682 04:12:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.682 04:12:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.682 [2024-11-21 04:12:00.631254] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:00.682 04:12:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.682 04:12:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:00.682 04:12:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:00.682 04:12:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:00.682 04:12:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:00.682 04:12:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:00.682 04:12:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:00.682 04:12:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.682 04:12:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.942 04:12:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:00.942 04:12:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.942 04:12:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.942 04:12:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.942 04:12:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:00.942 04:12:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.942 04:12:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.942 04:12:00 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.942 "name": "Existed_Raid", 00:14:00.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.942 "strip_size_kb": 64, 00:14:00.942 "state": "configuring", 00:14:00.942 "raid_level": "raid5f", 00:14:00.942 "superblock": false, 00:14:00.942 "num_base_bdevs": 3, 00:14:00.942 "num_base_bdevs_discovered": 1, 00:14:00.942 "num_base_bdevs_operational": 3, 00:14:00.942 "base_bdevs_list": [ 00:14:00.942 { 00:14:00.942 "name": null, 00:14:00.942 "uuid": "4ae8951d-2192-4dc5-8df6-4f323cb04835", 00:14:00.942 "is_configured": false, 00:14:00.942 "data_offset": 0, 00:14:00.942 "data_size": 65536 00:14:00.942 }, 00:14:00.942 { 00:14:00.942 "name": null, 00:14:00.942 "uuid": "e468f957-7f82-4046-b9af-558e84052523", 00:14:00.942 "is_configured": false, 00:14:00.942 "data_offset": 0, 00:14:00.942 "data_size": 65536 00:14:00.942 }, 00:14:00.942 { 00:14:00.942 "name": "BaseBdev3", 00:14:00.942 "uuid": "feb1a370-6dcc-4c86-b5fb-ed9fd386836a", 00:14:00.942 "is_configured": true, 00:14:00.942 "data_offset": 0, 00:14:00.942 "data_size": 65536 00:14:00.942 } 00:14:00.942 ] 00:14:00.942 }' 00:14:00.942 04:12:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.942 04:12:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.202 04:12:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:01.202 04:12:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.202 04:12:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.202 04:12:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.202 04:12:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.202 04:12:01 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:01.202 04:12:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:01.202 04:12:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.202 04:12:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.202 [2024-11-21 04:12:01.158094] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:01.202 04:12:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.202 04:12:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:01.202 04:12:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:01.202 04:12:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:01.202 04:12:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:01.202 04:12:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:01.202 04:12:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:01.202 04:12:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:01.202 04:12:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:01.202 04:12:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:01.202 04:12:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:01.202 04:12:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:01.202 04:12:01 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.202 04:12:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.202 04:12:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.461 04:12:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.461 04:12:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:01.461 "name": "Existed_Raid", 00:14:01.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.461 "strip_size_kb": 64, 00:14:01.461 "state": "configuring", 00:14:01.461 "raid_level": "raid5f", 00:14:01.461 "superblock": false, 00:14:01.461 "num_base_bdevs": 3, 00:14:01.461 "num_base_bdevs_discovered": 2, 00:14:01.461 "num_base_bdevs_operational": 3, 00:14:01.461 "base_bdevs_list": [ 00:14:01.461 { 00:14:01.461 "name": null, 00:14:01.461 "uuid": "4ae8951d-2192-4dc5-8df6-4f323cb04835", 00:14:01.461 "is_configured": false, 00:14:01.461 "data_offset": 0, 00:14:01.461 "data_size": 65536 00:14:01.461 }, 00:14:01.461 { 00:14:01.461 "name": "BaseBdev2", 00:14:01.461 "uuid": "e468f957-7f82-4046-b9af-558e84052523", 00:14:01.461 "is_configured": true, 00:14:01.461 "data_offset": 0, 00:14:01.461 "data_size": 65536 00:14:01.461 }, 00:14:01.461 { 00:14:01.461 "name": "BaseBdev3", 00:14:01.461 "uuid": "feb1a370-6dcc-4c86-b5fb-ed9fd386836a", 00:14:01.461 "is_configured": true, 00:14:01.461 "data_offset": 0, 00:14:01.461 "data_size": 65536 00:14:01.461 } 00:14:01.461 ] 00:14:01.461 }' 00:14:01.461 04:12:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:01.461 04:12:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.721 04:12:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.721 04:12:01 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:01.721 04:12:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.721 04:12:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.721 04:12:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.721 04:12:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:01.721 04:12:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.721 04:12:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.721 04:12:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:01.721 04:12:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.721 04:12:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.721 04:12:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 4ae8951d-2192-4dc5-8df6-4f323cb04835 00:14:01.721 04:12:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.721 04:12:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.981 [2024-11-21 04:12:01.705203] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:01.981 [2024-11-21 04:12:01.705294] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:14:01.981 [2024-11-21 04:12:01.705305] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:01.981 [2024-11-21 04:12:01.705577] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000002870 00:14:01.981 [2024-11-21 04:12:01.706069] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:14:01.981 [2024-11-21 04:12:01.706089] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:14:01.981 [2024-11-21 04:12:01.706319] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:01.981 NewBaseBdev 00:14:01.981 04:12:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.981 04:12:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:01.981 04:12:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:01.982 04:12:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:01.982 04:12:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:01.982 04:12:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:01.982 04:12:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:01.982 04:12:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:01.982 04:12:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.982 04:12:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.982 04:12:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.982 04:12:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:01.982 04:12:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.982 04:12:01 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.982 [ 00:14:01.982 { 00:14:01.982 "name": "NewBaseBdev", 00:14:01.982 "aliases": [ 00:14:01.982 "4ae8951d-2192-4dc5-8df6-4f323cb04835" 00:14:01.982 ], 00:14:01.982 "product_name": "Malloc disk", 00:14:01.982 "block_size": 512, 00:14:01.982 "num_blocks": 65536, 00:14:01.982 "uuid": "4ae8951d-2192-4dc5-8df6-4f323cb04835", 00:14:01.982 "assigned_rate_limits": { 00:14:01.982 "rw_ios_per_sec": 0, 00:14:01.982 "rw_mbytes_per_sec": 0, 00:14:01.982 "r_mbytes_per_sec": 0, 00:14:01.982 "w_mbytes_per_sec": 0 00:14:01.982 }, 00:14:01.982 "claimed": true, 00:14:01.982 "claim_type": "exclusive_write", 00:14:01.982 "zoned": false, 00:14:01.982 "supported_io_types": { 00:14:01.982 "read": true, 00:14:01.982 "write": true, 00:14:01.982 "unmap": true, 00:14:01.982 "flush": true, 00:14:01.982 "reset": true, 00:14:01.982 "nvme_admin": false, 00:14:01.982 "nvme_io": false, 00:14:01.982 "nvme_io_md": false, 00:14:01.982 "write_zeroes": true, 00:14:01.982 "zcopy": true, 00:14:01.982 "get_zone_info": false, 00:14:01.982 "zone_management": false, 00:14:01.982 "zone_append": false, 00:14:01.982 "compare": false, 00:14:01.982 "compare_and_write": false, 00:14:01.982 "abort": true, 00:14:01.982 "seek_hole": false, 00:14:01.982 "seek_data": false, 00:14:01.982 "copy": true, 00:14:01.982 "nvme_iov_md": false 00:14:01.982 }, 00:14:01.982 "memory_domains": [ 00:14:01.982 { 00:14:01.982 "dma_device_id": "system", 00:14:01.982 "dma_device_type": 1 00:14:01.982 }, 00:14:01.982 { 00:14:01.982 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:01.982 "dma_device_type": 2 00:14:01.982 } 00:14:01.982 ], 00:14:01.982 "driver_specific": {} 00:14:01.982 } 00:14:01.982 ] 00:14:01.982 04:12:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.982 04:12:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:01.982 04:12:01 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:01.982 04:12:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:01.982 04:12:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:01.982 04:12:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:01.982 04:12:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:01.982 04:12:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:01.982 04:12:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:01.982 04:12:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:01.982 04:12:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:01.982 04:12:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:01.982 04:12:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:01.982 04:12:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.982 04:12:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.982 04:12:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.982 04:12:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.982 04:12:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:01.982 "name": "Existed_Raid", 00:14:01.982 "uuid": "189c5881-fb9c-486d-8839-e70a3aab64ef", 00:14:01.982 "strip_size_kb": 64, 00:14:01.982 "state": "online", 
00:14:01.982 "raid_level": "raid5f", 00:14:01.982 "superblock": false, 00:14:01.982 "num_base_bdevs": 3, 00:14:01.982 "num_base_bdevs_discovered": 3, 00:14:01.982 "num_base_bdevs_operational": 3, 00:14:01.982 "base_bdevs_list": [ 00:14:01.982 { 00:14:01.982 "name": "NewBaseBdev", 00:14:01.982 "uuid": "4ae8951d-2192-4dc5-8df6-4f323cb04835", 00:14:01.982 "is_configured": true, 00:14:01.982 "data_offset": 0, 00:14:01.982 "data_size": 65536 00:14:01.982 }, 00:14:01.982 { 00:14:01.982 "name": "BaseBdev2", 00:14:01.982 "uuid": "e468f957-7f82-4046-b9af-558e84052523", 00:14:01.982 "is_configured": true, 00:14:01.982 "data_offset": 0, 00:14:01.982 "data_size": 65536 00:14:01.982 }, 00:14:01.982 { 00:14:01.982 "name": "BaseBdev3", 00:14:01.982 "uuid": "feb1a370-6dcc-4c86-b5fb-ed9fd386836a", 00:14:01.982 "is_configured": true, 00:14:01.982 "data_offset": 0, 00:14:01.982 "data_size": 65536 00:14:01.982 } 00:14:01.982 ] 00:14:01.982 }' 00:14:01.982 04:12:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:01.982 04:12:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.242 04:12:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:02.242 04:12:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:02.242 04:12:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:02.242 04:12:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:02.242 04:12:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:02.242 04:12:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:02.242 04:12:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:02.242 04:12:02 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.242 04:12:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.242 04:12:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:02.242 [2024-11-21 04:12:02.208543] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:02.502 04:12:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.502 04:12:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:02.502 "name": "Existed_Raid", 00:14:02.502 "aliases": [ 00:14:02.502 "189c5881-fb9c-486d-8839-e70a3aab64ef" 00:14:02.502 ], 00:14:02.502 "product_name": "Raid Volume", 00:14:02.502 "block_size": 512, 00:14:02.502 "num_blocks": 131072, 00:14:02.502 "uuid": "189c5881-fb9c-486d-8839-e70a3aab64ef", 00:14:02.502 "assigned_rate_limits": { 00:14:02.502 "rw_ios_per_sec": 0, 00:14:02.502 "rw_mbytes_per_sec": 0, 00:14:02.502 "r_mbytes_per_sec": 0, 00:14:02.502 "w_mbytes_per_sec": 0 00:14:02.502 }, 00:14:02.502 "claimed": false, 00:14:02.502 "zoned": false, 00:14:02.502 "supported_io_types": { 00:14:02.502 "read": true, 00:14:02.502 "write": true, 00:14:02.502 "unmap": false, 00:14:02.502 "flush": false, 00:14:02.502 "reset": true, 00:14:02.502 "nvme_admin": false, 00:14:02.502 "nvme_io": false, 00:14:02.502 "nvme_io_md": false, 00:14:02.502 "write_zeroes": true, 00:14:02.502 "zcopy": false, 00:14:02.502 "get_zone_info": false, 00:14:02.502 "zone_management": false, 00:14:02.502 "zone_append": false, 00:14:02.502 "compare": false, 00:14:02.502 "compare_and_write": false, 00:14:02.502 "abort": false, 00:14:02.502 "seek_hole": false, 00:14:02.502 "seek_data": false, 00:14:02.502 "copy": false, 00:14:02.502 "nvme_iov_md": false 00:14:02.502 }, 00:14:02.502 "driver_specific": { 00:14:02.502 "raid": { 00:14:02.502 "uuid": 
"189c5881-fb9c-486d-8839-e70a3aab64ef", 00:14:02.502 "strip_size_kb": 64, 00:14:02.502 "state": "online", 00:14:02.502 "raid_level": "raid5f", 00:14:02.502 "superblock": false, 00:14:02.502 "num_base_bdevs": 3, 00:14:02.502 "num_base_bdevs_discovered": 3, 00:14:02.502 "num_base_bdevs_operational": 3, 00:14:02.502 "base_bdevs_list": [ 00:14:02.502 { 00:14:02.502 "name": "NewBaseBdev", 00:14:02.502 "uuid": "4ae8951d-2192-4dc5-8df6-4f323cb04835", 00:14:02.502 "is_configured": true, 00:14:02.502 "data_offset": 0, 00:14:02.502 "data_size": 65536 00:14:02.502 }, 00:14:02.502 { 00:14:02.502 "name": "BaseBdev2", 00:14:02.502 "uuid": "e468f957-7f82-4046-b9af-558e84052523", 00:14:02.502 "is_configured": true, 00:14:02.502 "data_offset": 0, 00:14:02.502 "data_size": 65536 00:14:02.502 }, 00:14:02.502 { 00:14:02.502 "name": "BaseBdev3", 00:14:02.502 "uuid": "feb1a370-6dcc-4c86-b5fb-ed9fd386836a", 00:14:02.502 "is_configured": true, 00:14:02.502 "data_offset": 0, 00:14:02.502 "data_size": 65536 00:14:02.502 } 00:14:02.502 ] 00:14:02.502 } 00:14:02.502 } 00:14:02.502 }' 00:14:02.502 04:12:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:02.502 04:12:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:02.502 BaseBdev2 00:14:02.502 BaseBdev3' 00:14:02.502 04:12:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:02.502 04:12:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:02.502 04:12:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:02.502 04:12:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:02.502 04:12:02 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:02.502 04:12:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.502 04:12:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.502 04:12:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.502 04:12:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:02.502 04:12:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:02.502 04:12:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:02.502 04:12:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:02.502 04:12:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:02.502 04:12:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.502 04:12:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.502 04:12:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.502 04:12:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:02.502 04:12:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:02.502 04:12:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:02.502 04:12:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:02.502 04:12:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.502 04:12:02 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.503 04:12:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:02.503 04:12:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.503 04:12:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:02.503 04:12:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:02.503 04:12:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:02.503 04:12:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.503 04:12:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.503 [2024-11-21 04:12:02.468068] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:02.503 [2024-11-21 04:12:02.468141] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:02.503 [2024-11-21 04:12:02.468277] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:02.503 [2024-11-21 04:12:02.468593] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:02.503 [2024-11-21 04:12:02.468662] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:14:02.503 04:12:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.503 04:12:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 90500 00:14:02.762 04:12:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 90500 ']' 00:14:02.762 04:12:02 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 90500 00:14:02.762 04:12:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:14:02.762 04:12:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:02.762 04:12:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90500 00:14:02.762 04:12:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:02.762 04:12:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:02.762 04:12:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90500' 00:14:02.762 killing process with pid 90500 00:14:02.762 04:12:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 90500 00:14:02.762 [2024-11-21 04:12:02.506271] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:02.762 04:12:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 90500 00:14:02.762 [2024-11-21 04:12:02.562439] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:03.022 04:12:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:14:03.022 00:14:03.022 real 0m8.825s 00:14:03.022 user 0m14.778s 00:14:03.022 sys 0m1.932s 00:14:03.022 ************************************ 00:14:03.022 END TEST raid5f_state_function_test 00:14:03.022 ************************************ 00:14:03.022 04:12:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:03.022 04:12:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.022 04:12:02 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:14:03.022 04:12:02 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:03.022 04:12:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:03.022 04:12:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:03.022 ************************************ 00:14:03.022 START TEST raid5f_state_function_test_sb 00:14:03.022 ************************************ 00:14:03.022 04:12:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:14:03.022 04:12:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:03.022 04:12:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:14:03.022 04:12:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:03.022 04:12:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:03.022 04:12:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:03.022 04:12:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:03.022 04:12:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:03.022 04:12:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:03.022 04:12:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:03.022 04:12:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:03.022 04:12:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:03.022 04:12:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:03.022 04:12:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:03.022 04:12:02 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:03.022 04:12:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:03.022 04:12:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:03.022 04:12:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:03.022 04:12:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:03.022 04:12:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:03.022 04:12:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:03.022 04:12:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:03.022 04:12:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:03.022 04:12:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:03.022 04:12:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:03.022 04:12:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:03.022 04:12:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:03.022 04:12:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=91108 00:14:03.022 04:12:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:03.022 04:12:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 91108' 00:14:03.022 Process raid pid: 91108 00:14:03.022 04:12:02 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 91108 00:14:03.022 04:12:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 91108 ']' 00:14:03.022 04:12:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:03.022 04:12:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:03.022 04:12:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:03.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:03.022 04:12:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:03.022 04:12:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.281 [2024-11-21 04:12:03.051393] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:14:03.281 [2024-11-21 04:12:03.051578] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:03.281 [2024-11-21 04:12:03.208024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:03.281 [2024-11-21 04:12:03.246667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:03.541 [2024-11-21 04:12:03.321925] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:03.541 [2024-11-21 04:12:03.322070] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:04.111 04:12:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:04.111 04:12:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:04.111 04:12:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:04.111 04:12:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.111 04:12:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.111 [2024-11-21 04:12:03.893001] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:04.111 [2024-11-21 04:12:03.893165] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:04.111 [2024-11-21 04:12:03.893210] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:04.111 [2024-11-21 04:12:03.893273] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:04.111 [2024-11-21 04:12:03.893338] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:14:04.111 [2024-11-21 04:12:03.893380] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:04.111 04:12:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.111 04:12:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:04.111 04:12:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:04.111 04:12:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:04.111 04:12:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:04.111 04:12:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:04.111 04:12:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:04.111 04:12:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:04.111 04:12:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:04.111 04:12:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:04.111 04:12:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:04.111 04:12:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.111 04:12:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:04.111 04:12:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.111 04:12:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.111 04:12:03 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.111 04:12:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:04.111 "name": "Existed_Raid", 00:14:04.111 "uuid": "830f4c27-02db-460d-a2d1-893ddd0d85b8", 00:14:04.111 "strip_size_kb": 64, 00:14:04.111 "state": "configuring", 00:14:04.111 "raid_level": "raid5f", 00:14:04.111 "superblock": true, 00:14:04.111 "num_base_bdevs": 3, 00:14:04.111 "num_base_bdevs_discovered": 0, 00:14:04.111 "num_base_bdevs_operational": 3, 00:14:04.111 "base_bdevs_list": [ 00:14:04.111 { 00:14:04.111 "name": "BaseBdev1", 00:14:04.111 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.111 "is_configured": false, 00:14:04.111 "data_offset": 0, 00:14:04.111 "data_size": 0 00:14:04.111 }, 00:14:04.111 { 00:14:04.111 "name": "BaseBdev2", 00:14:04.111 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.111 "is_configured": false, 00:14:04.111 "data_offset": 0, 00:14:04.111 "data_size": 0 00:14:04.111 }, 00:14:04.111 { 00:14:04.111 "name": "BaseBdev3", 00:14:04.111 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.111 "is_configured": false, 00:14:04.111 "data_offset": 0, 00:14:04.111 "data_size": 0 00:14:04.111 } 00:14:04.111 ] 00:14:04.111 }' 00:14:04.111 04:12:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:04.111 04:12:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.681 04:12:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:04.681 04:12:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.681 04:12:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.681 [2024-11-21 04:12:04.368057] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:04.681 
[2024-11-21 04:12:04.368169] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:14:04.681 04:12:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.681 04:12:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:04.681 04:12:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.681 04:12:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.681 [2024-11-21 04:12:04.380074] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:04.681 [2024-11-21 04:12:04.380162] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:04.681 [2024-11-21 04:12:04.380188] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:04.681 [2024-11-21 04:12:04.380211] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:04.681 [2024-11-21 04:12:04.380238] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:04.681 [2024-11-21 04:12:04.380260] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:04.681 04:12:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.681 04:12:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:04.681 04:12:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.681 04:12:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.681 [2024-11-21 04:12:04.406970] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:04.681 BaseBdev1 00:14:04.681 04:12:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.681 04:12:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:04.681 04:12:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:04.681 04:12:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:04.681 04:12:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:04.681 04:12:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:04.681 04:12:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:04.681 04:12:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:04.681 04:12:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.681 04:12:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.681 04:12:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.681 04:12:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:04.681 04:12:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.681 04:12:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.681 [ 00:14:04.681 { 00:14:04.681 "name": "BaseBdev1", 00:14:04.681 "aliases": [ 00:14:04.681 "921e8951-c381-4d93-ad40-8479230afe53" 00:14:04.681 ], 00:14:04.681 "product_name": "Malloc disk", 00:14:04.681 "block_size": 512, 00:14:04.681 
"num_blocks": 65536, 00:14:04.681 "uuid": "921e8951-c381-4d93-ad40-8479230afe53", 00:14:04.681 "assigned_rate_limits": { 00:14:04.681 "rw_ios_per_sec": 0, 00:14:04.681 "rw_mbytes_per_sec": 0, 00:14:04.681 "r_mbytes_per_sec": 0, 00:14:04.682 "w_mbytes_per_sec": 0 00:14:04.682 }, 00:14:04.682 "claimed": true, 00:14:04.682 "claim_type": "exclusive_write", 00:14:04.682 "zoned": false, 00:14:04.682 "supported_io_types": { 00:14:04.682 "read": true, 00:14:04.682 "write": true, 00:14:04.682 "unmap": true, 00:14:04.682 "flush": true, 00:14:04.682 "reset": true, 00:14:04.682 "nvme_admin": false, 00:14:04.682 "nvme_io": false, 00:14:04.682 "nvme_io_md": false, 00:14:04.682 "write_zeroes": true, 00:14:04.682 "zcopy": true, 00:14:04.682 "get_zone_info": false, 00:14:04.682 "zone_management": false, 00:14:04.682 "zone_append": false, 00:14:04.682 "compare": false, 00:14:04.682 "compare_and_write": false, 00:14:04.682 "abort": true, 00:14:04.682 "seek_hole": false, 00:14:04.682 "seek_data": false, 00:14:04.682 "copy": true, 00:14:04.682 "nvme_iov_md": false 00:14:04.682 }, 00:14:04.682 "memory_domains": [ 00:14:04.682 { 00:14:04.682 "dma_device_id": "system", 00:14:04.682 "dma_device_type": 1 00:14:04.682 }, 00:14:04.682 { 00:14:04.682 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:04.682 "dma_device_type": 2 00:14:04.682 } 00:14:04.682 ], 00:14:04.682 "driver_specific": {} 00:14:04.682 } 00:14:04.682 ] 00:14:04.682 04:12:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.682 04:12:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:04.682 04:12:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:04.682 04:12:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:04.682 04:12:04 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:04.682 04:12:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:04.682 04:12:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:04.682 04:12:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:04.682 04:12:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:04.682 04:12:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:04.682 04:12:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:04.682 04:12:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:04.682 04:12:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.682 04:12:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:04.682 04:12:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.682 04:12:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.682 04:12:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.682 04:12:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:04.682 "name": "Existed_Raid", 00:14:04.682 "uuid": "08b6afa5-d655-4674-a64a-a0ae48a083d4", 00:14:04.682 "strip_size_kb": 64, 00:14:04.682 "state": "configuring", 00:14:04.682 "raid_level": "raid5f", 00:14:04.682 "superblock": true, 00:14:04.682 "num_base_bdevs": 3, 00:14:04.682 "num_base_bdevs_discovered": 1, 00:14:04.682 "num_base_bdevs_operational": 3, 00:14:04.682 "base_bdevs_list": [ 00:14:04.682 { 00:14:04.682 
"name": "BaseBdev1", 00:14:04.682 "uuid": "921e8951-c381-4d93-ad40-8479230afe53", 00:14:04.682 "is_configured": true, 00:14:04.682 "data_offset": 2048, 00:14:04.682 "data_size": 63488 00:14:04.682 }, 00:14:04.682 { 00:14:04.682 "name": "BaseBdev2", 00:14:04.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.682 "is_configured": false, 00:14:04.682 "data_offset": 0, 00:14:04.682 "data_size": 0 00:14:04.682 }, 00:14:04.682 { 00:14:04.682 "name": "BaseBdev3", 00:14:04.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.682 "is_configured": false, 00:14:04.682 "data_offset": 0, 00:14:04.682 "data_size": 0 00:14:04.682 } 00:14:04.682 ] 00:14:04.682 }' 00:14:04.682 04:12:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:04.682 04:12:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.942 04:12:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:04.942 04:12:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.942 04:12:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.942 [2024-11-21 04:12:04.866192] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:04.942 [2024-11-21 04:12:04.866287] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:14:04.942 04:12:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.942 04:12:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:04.942 04:12:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.942 04:12:04 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:14:04.942 [2024-11-21 04:12:04.878215] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:04.942 [2024-11-21 04:12:04.880350] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:04.942 [2024-11-21 04:12:04.880390] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:04.942 [2024-11-21 04:12:04.880399] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:04.942 [2024-11-21 04:12:04.880410] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:04.942 04:12:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.942 04:12:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:04.942 04:12:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:04.942 04:12:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:04.942 04:12:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:04.942 04:12:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:04.942 04:12:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:04.942 04:12:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:04.942 04:12:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:04.942 04:12:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:04.942 04:12:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:14:04.943 04:12:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:04.943 04:12:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:04.943 04:12:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.943 04:12:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.943 04:12:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.943 04:12:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:04.943 04:12:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.202 04:12:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:05.202 "name": "Existed_Raid", 00:14:05.202 "uuid": "c4ddeb04-bf29-4dd1-b4b0-389e44bae61a", 00:14:05.202 "strip_size_kb": 64, 00:14:05.202 "state": "configuring", 00:14:05.202 "raid_level": "raid5f", 00:14:05.202 "superblock": true, 00:14:05.202 "num_base_bdevs": 3, 00:14:05.202 "num_base_bdevs_discovered": 1, 00:14:05.202 "num_base_bdevs_operational": 3, 00:14:05.202 "base_bdevs_list": [ 00:14:05.202 { 00:14:05.202 "name": "BaseBdev1", 00:14:05.202 "uuid": "921e8951-c381-4d93-ad40-8479230afe53", 00:14:05.202 "is_configured": true, 00:14:05.202 "data_offset": 2048, 00:14:05.202 "data_size": 63488 00:14:05.202 }, 00:14:05.202 { 00:14:05.202 "name": "BaseBdev2", 00:14:05.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.202 "is_configured": false, 00:14:05.202 "data_offset": 0, 00:14:05.202 "data_size": 0 00:14:05.202 }, 00:14:05.202 { 00:14:05.202 "name": "BaseBdev3", 00:14:05.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.202 "is_configured": false, 00:14:05.202 "data_offset": 0, 00:14:05.202 "data_size": 
0 00:14:05.202 } 00:14:05.202 ] 00:14:05.202 }' 00:14:05.202 04:12:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:05.203 04:12:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.467 04:12:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:05.467 04:12:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.467 04:12:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.468 [2024-11-21 04:12:05.354006] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:05.468 BaseBdev2 00:14:05.468 04:12:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.468 04:12:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:05.468 04:12:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:05.468 04:12:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:05.468 04:12:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:05.468 04:12:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:05.468 04:12:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:05.468 04:12:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:05.468 04:12:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.468 04:12:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.468 04:12:05 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.468 04:12:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:05.468 04:12:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.468 04:12:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.468 [ 00:14:05.468 { 00:14:05.468 "name": "BaseBdev2", 00:14:05.468 "aliases": [ 00:14:05.468 "a6d729b7-8b6c-4146-8175-2019e339e315" 00:14:05.468 ], 00:14:05.468 "product_name": "Malloc disk", 00:14:05.468 "block_size": 512, 00:14:05.468 "num_blocks": 65536, 00:14:05.468 "uuid": "a6d729b7-8b6c-4146-8175-2019e339e315", 00:14:05.468 "assigned_rate_limits": { 00:14:05.468 "rw_ios_per_sec": 0, 00:14:05.468 "rw_mbytes_per_sec": 0, 00:14:05.468 "r_mbytes_per_sec": 0, 00:14:05.468 "w_mbytes_per_sec": 0 00:14:05.468 }, 00:14:05.468 "claimed": true, 00:14:05.468 "claim_type": "exclusive_write", 00:14:05.468 "zoned": false, 00:14:05.468 "supported_io_types": { 00:14:05.468 "read": true, 00:14:05.468 "write": true, 00:14:05.468 "unmap": true, 00:14:05.468 "flush": true, 00:14:05.468 "reset": true, 00:14:05.468 "nvme_admin": false, 00:14:05.468 "nvme_io": false, 00:14:05.468 "nvme_io_md": false, 00:14:05.468 "write_zeroes": true, 00:14:05.468 "zcopy": true, 00:14:05.468 "get_zone_info": false, 00:14:05.468 "zone_management": false, 00:14:05.468 "zone_append": false, 00:14:05.468 "compare": false, 00:14:05.468 "compare_and_write": false, 00:14:05.468 "abort": true, 00:14:05.468 "seek_hole": false, 00:14:05.468 "seek_data": false, 00:14:05.468 "copy": true, 00:14:05.468 "nvme_iov_md": false 00:14:05.468 }, 00:14:05.468 "memory_domains": [ 00:14:05.468 { 00:14:05.468 "dma_device_id": "system", 00:14:05.468 "dma_device_type": 1 00:14:05.468 }, 00:14:05.468 { 00:14:05.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:05.468 "dma_device_type": 2 00:14:05.468 } 
00:14:05.468 ], 00:14:05.468 "driver_specific": {} 00:14:05.468 } 00:14:05.468 ] 00:14:05.468 04:12:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.468 04:12:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:05.468 04:12:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:05.468 04:12:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:05.468 04:12:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:05.468 04:12:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:05.468 04:12:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:05.468 04:12:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:05.468 04:12:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:05.468 04:12:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:05.468 04:12:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:05.468 04:12:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:05.468 04:12:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:05.468 04:12:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:05.468 04:12:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.468 04:12:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.468 04:12:05 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.468 04:12:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:05.468 04:12:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.729 04:12:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:05.729 "name": "Existed_Raid", 00:14:05.729 "uuid": "c4ddeb04-bf29-4dd1-b4b0-389e44bae61a", 00:14:05.729 "strip_size_kb": 64, 00:14:05.729 "state": "configuring", 00:14:05.729 "raid_level": "raid5f", 00:14:05.729 "superblock": true, 00:14:05.729 "num_base_bdevs": 3, 00:14:05.729 "num_base_bdevs_discovered": 2, 00:14:05.729 "num_base_bdevs_operational": 3, 00:14:05.729 "base_bdevs_list": [ 00:14:05.729 { 00:14:05.729 "name": "BaseBdev1", 00:14:05.729 "uuid": "921e8951-c381-4d93-ad40-8479230afe53", 00:14:05.729 "is_configured": true, 00:14:05.729 "data_offset": 2048, 00:14:05.729 "data_size": 63488 00:14:05.729 }, 00:14:05.729 { 00:14:05.729 "name": "BaseBdev2", 00:14:05.729 "uuid": "a6d729b7-8b6c-4146-8175-2019e339e315", 00:14:05.729 "is_configured": true, 00:14:05.729 "data_offset": 2048, 00:14:05.729 "data_size": 63488 00:14:05.729 }, 00:14:05.729 { 00:14:05.729 "name": "BaseBdev3", 00:14:05.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.729 "is_configured": false, 00:14:05.729 "data_offset": 0, 00:14:05.729 "data_size": 0 00:14:05.729 } 00:14:05.729 ] 00:14:05.729 }' 00:14:05.729 04:12:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:05.729 04:12:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.989 04:12:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:05.990 04:12:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 
-- # xtrace_disable 00:14:05.990 04:12:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.990 [2024-11-21 04:12:05.879499] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:05.990 [2024-11-21 04:12:05.880254] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:14:05.990 [2024-11-21 04:12:05.880434] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:05.990 BaseBdev3 00:14:05.990 [2024-11-21 04:12:05.881506] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:14:05.990 04:12:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.990 04:12:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:05.990 04:12:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:05.990 04:12:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:05.990 04:12:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:05.990 04:12:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:05.990 [2024-11-21 04:12:05.883291] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:14:05.990 [2024-11-21 04:12:05.883340] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:14:05.990 04:12:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:05.990 04:12:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:05.990 [2024-11-21 04:12:05.883762] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:14:05.990 04:12:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.990 04:12:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.990 04:12:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.990 04:12:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:05.990 04:12:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.990 04:12:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.990 [ 00:14:05.990 { 00:14:05.990 "name": "BaseBdev3", 00:14:05.990 "aliases": [ 00:14:05.990 "67b4c7d7-02f7-44db-bc06-9b7334b5fbc1" 00:14:05.990 ], 00:14:05.990 "product_name": "Malloc disk", 00:14:05.990 "block_size": 512, 00:14:05.990 "num_blocks": 65536, 00:14:05.990 "uuid": "67b4c7d7-02f7-44db-bc06-9b7334b5fbc1", 00:14:05.990 "assigned_rate_limits": { 00:14:05.990 "rw_ios_per_sec": 0, 00:14:05.990 "rw_mbytes_per_sec": 0, 00:14:05.990 "r_mbytes_per_sec": 0, 00:14:05.990 "w_mbytes_per_sec": 0 00:14:05.990 }, 00:14:05.990 "claimed": true, 00:14:05.990 "claim_type": "exclusive_write", 00:14:05.990 "zoned": false, 00:14:05.990 "supported_io_types": { 00:14:05.990 "read": true, 00:14:05.990 "write": true, 00:14:05.990 "unmap": true, 00:14:05.990 "flush": true, 00:14:05.990 "reset": true, 00:14:05.990 "nvme_admin": false, 00:14:05.990 "nvme_io": false, 00:14:05.990 "nvme_io_md": false, 00:14:05.990 "write_zeroes": true, 00:14:05.990 "zcopy": true, 00:14:05.990 "get_zone_info": false, 00:14:05.990 "zone_management": false, 00:14:05.990 "zone_append": false, 00:14:05.990 "compare": false, 00:14:05.990 "compare_and_write": false, 00:14:05.990 "abort": true, 00:14:05.990 "seek_hole": false, 00:14:05.990 "seek_data": false, 00:14:05.990 "copy": true, 00:14:05.990 "nvme_iov_md": 
false 00:14:05.990 }, 00:14:05.990 "memory_domains": [ 00:14:05.990 { 00:14:05.990 "dma_device_id": "system", 00:14:05.990 "dma_device_type": 1 00:14:05.990 }, 00:14:05.990 { 00:14:05.990 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:05.990 "dma_device_type": 2 00:14:05.990 } 00:14:05.990 ], 00:14:05.990 "driver_specific": {} 00:14:05.990 } 00:14:05.990 ] 00:14:05.990 04:12:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.990 04:12:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:05.990 04:12:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:05.990 04:12:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:05.990 04:12:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:05.990 04:12:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:05.990 04:12:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:05.990 04:12:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:05.990 04:12:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:05.990 04:12:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:05.990 04:12:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:05.990 04:12:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:05.990 04:12:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:05.990 04:12:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local 
tmp 00:14:05.990 04:12:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.990 04:12:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:05.990 04:12:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.990 04:12:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.990 04:12:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.250 04:12:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:06.250 "name": "Existed_Raid", 00:14:06.250 "uuid": "c4ddeb04-bf29-4dd1-b4b0-389e44bae61a", 00:14:06.250 "strip_size_kb": 64, 00:14:06.250 "state": "online", 00:14:06.250 "raid_level": "raid5f", 00:14:06.250 "superblock": true, 00:14:06.251 "num_base_bdevs": 3, 00:14:06.251 "num_base_bdevs_discovered": 3, 00:14:06.251 "num_base_bdevs_operational": 3, 00:14:06.251 "base_bdevs_list": [ 00:14:06.251 { 00:14:06.251 "name": "BaseBdev1", 00:14:06.251 "uuid": "921e8951-c381-4d93-ad40-8479230afe53", 00:14:06.251 "is_configured": true, 00:14:06.251 "data_offset": 2048, 00:14:06.251 "data_size": 63488 00:14:06.251 }, 00:14:06.251 { 00:14:06.251 "name": "BaseBdev2", 00:14:06.251 "uuid": "a6d729b7-8b6c-4146-8175-2019e339e315", 00:14:06.251 "is_configured": true, 00:14:06.251 "data_offset": 2048, 00:14:06.251 "data_size": 63488 00:14:06.251 }, 00:14:06.251 { 00:14:06.251 "name": "BaseBdev3", 00:14:06.251 "uuid": "67b4c7d7-02f7-44db-bc06-9b7334b5fbc1", 00:14:06.251 "is_configured": true, 00:14:06.251 "data_offset": 2048, 00:14:06.251 "data_size": 63488 00:14:06.251 } 00:14:06.251 ] 00:14:06.251 }' 00:14:06.251 04:12:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:06.251 04:12:05 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:06.511 04:12:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:06.511 04:12:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:06.511 04:12:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:06.511 04:12:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:06.511 04:12:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:06.511 04:12:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:06.511 04:12:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:06.511 04:12:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:06.511 04:12:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.511 04:12:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.511 [2024-11-21 04:12:06.386949] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:06.511 04:12:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.511 04:12:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:06.511 "name": "Existed_Raid", 00:14:06.511 "aliases": [ 00:14:06.511 "c4ddeb04-bf29-4dd1-b4b0-389e44bae61a" 00:14:06.511 ], 00:14:06.511 "product_name": "Raid Volume", 00:14:06.511 "block_size": 512, 00:14:06.511 "num_blocks": 126976, 00:14:06.511 "uuid": "c4ddeb04-bf29-4dd1-b4b0-389e44bae61a", 00:14:06.511 "assigned_rate_limits": { 00:14:06.511 "rw_ios_per_sec": 0, 00:14:06.511 "rw_mbytes_per_sec": 0, 00:14:06.511 "r_mbytes_per_sec": 
0, 00:14:06.511 "w_mbytes_per_sec": 0 00:14:06.511 }, 00:14:06.511 "claimed": false, 00:14:06.511 "zoned": false, 00:14:06.511 "supported_io_types": { 00:14:06.511 "read": true, 00:14:06.511 "write": true, 00:14:06.511 "unmap": false, 00:14:06.511 "flush": false, 00:14:06.511 "reset": true, 00:14:06.511 "nvme_admin": false, 00:14:06.511 "nvme_io": false, 00:14:06.511 "nvme_io_md": false, 00:14:06.511 "write_zeroes": true, 00:14:06.511 "zcopy": false, 00:14:06.511 "get_zone_info": false, 00:14:06.511 "zone_management": false, 00:14:06.511 "zone_append": false, 00:14:06.511 "compare": false, 00:14:06.511 "compare_and_write": false, 00:14:06.511 "abort": false, 00:14:06.511 "seek_hole": false, 00:14:06.511 "seek_data": false, 00:14:06.511 "copy": false, 00:14:06.511 "nvme_iov_md": false 00:14:06.511 }, 00:14:06.511 "driver_specific": { 00:14:06.511 "raid": { 00:14:06.511 "uuid": "c4ddeb04-bf29-4dd1-b4b0-389e44bae61a", 00:14:06.511 "strip_size_kb": 64, 00:14:06.511 "state": "online", 00:14:06.511 "raid_level": "raid5f", 00:14:06.511 "superblock": true, 00:14:06.511 "num_base_bdevs": 3, 00:14:06.511 "num_base_bdevs_discovered": 3, 00:14:06.511 "num_base_bdevs_operational": 3, 00:14:06.511 "base_bdevs_list": [ 00:14:06.511 { 00:14:06.511 "name": "BaseBdev1", 00:14:06.511 "uuid": "921e8951-c381-4d93-ad40-8479230afe53", 00:14:06.511 "is_configured": true, 00:14:06.511 "data_offset": 2048, 00:14:06.511 "data_size": 63488 00:14:06.511 }, 00:14:06.511 { 00:14:06.511 "name": "BaseBdev2", 00:14:06.511 "uuid": "a6d729b7-8b6c-4146-8175-2019e339e315", 00:14:06.511 "is_configured": true, 00:14:06.511 "data_offset": 2048, 00:14:06.511 "data_size": 63488 00:14:06.511 }, 00:14:06.511 { 00:14:06.511 "name": "BaseBdev3", 00:14:06.511 "uuid": "67b4c7d7-02f7-44db-bc06-9b7334b5fbc1", 00:14:06.511 "is_configured": true, 00:14:06.511 "data_offset": 2048, 00:14:06.511 "data_size": 63488 00:14:06.511 } 00:14:06.511 ] 00:14:06.511 } 00:14:06.511 } 00:14:06.511 }' 00:14:06.512 04:12:06 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:06.512 04:12:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:06.512 BaseBdev2 00:14:06.512 BaseBdev3' 00:14:06.512 04:12:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:06.772 04:12:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:06.772 04:12:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:06.772 04:12:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:06.772 04:12:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.772 04:12:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.772 04:12:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:06.772 04:12:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.772 04:12:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:06.772 04:12:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:06.772 04:12:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:06.772 04:12:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:06.772 04:12:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.772 04:12:06 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.772 04:12:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:06.772 04:12:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.772 04:12:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:06.772 04:12:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:06.772 04:12:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:06.772 04:12:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:06.772 04:12:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:06.772 04:12:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.772 04:12:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.772 04:12:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.772 04:12:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:06.772 04:12:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:06.772 04:12:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:06.772 04:12:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.772 04:12:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.772 [2024-11-21 04:12:06.686297] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:06.772 04:12:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.772 04:12:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:06.772 04:12:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:14:06.772 04:12:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:06.772 04:12:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:14:06.772 04:12:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:06.772 04:12:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:14:06.772 04:12:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:06.772 04:12:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:06.772 04:12:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:06.772 04:12:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:06.773 04:12:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:06.773 04:12:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:06.773 04:12:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:06.773 04:12:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:06.773 04:12:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:06.773 04:12:06 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:06.773 04:12:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.773 04:12:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.773 04:12:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.773 04:12:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.033 04:12:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:07.033 "name": "Existed_Raid", 00:14:07.033 "uuid": "c4ddeb04-bf29-4dd1-b4b0-389e44bae61a", 00:14:07.033 "strip_size_kb": 64, 00:14:07.033 "state": "online", 00:14:07.033 "raid_level": "raid5f", 00:14:07.033 "superblock": true, 00:14:07.033 "num_base_bdevs": 3, 00:14:07.033 "num_base_bdevs_discovered": 2, 00:14:07.033 "num_base_bdevs_operational": 2, 00:14:07.033 "base_bdevs_list": [ 00:14:07.033 { 00:14:07.033 "name": null, 00:14:07.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.033 "is_configured": false, 00:14:07.033 "data_offset": 0, 00:14:07.033 "data_size": 63488 00:14:07.033 }, 00:14:07.033 { 00:14:07.033 "name": "BaseBdev2", 00:14:07.033 "uuid": "a6d729b7-8b6c-4146-8175-2019e339e315", 00:14:07.033 "is_configured": true, 00:14:07.033 "data_offset": 2048, 00:14:07.033 "data_size": 63488 00:14:07.033 }, 00:14:07.033 { 00:14:07.033 "name": "BaseBdev3", 00:14:07.033 "uuid": "67b4c7d7-02f7-44db-bc06-9b7334b5fbc1", 00:14:07.033 "is_configured": true, 00:14:07.033 "data_offset": 2048, 00:14:07.033 "data_size": 63488 00:14:07.033 } 00:14:07.033 ] 00:14:07.033 }' 00:14:07.033 04:12:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:07.033 04:12:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.293 04:12:07 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:07.293 04:12:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:07.293 04:12:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.293 04:12:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.293 04:12:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.293 04:12:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:07.293 04:12:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.293 04:12:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:07.293 04:12:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:07.293 04:12:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:07.293 04:12:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.293 04:12:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.293 [2024-11-21 04:12:07.233902] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:07.293 [2024-11-21 04:12:07.234115] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:07.293 [2024-11-21 04:12:07.254369] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:07.293 04:12:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.293 04:12:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:07.293 04:12:07 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:07.293 04:12:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:07.293 04:12:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.293 04:12:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.293 04:12:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.553 04:12:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.553 04:12:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:07.553 04:12:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:07.553 04:12:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:07.553 04:12:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.553 04:12:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.553 [2024-11-21 04:12:07.302312] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:07.553 [2024-11-21 04:12:07.302397] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:14:07.553 04:12:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.553 04:12:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:07.553 04:12:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:07.553 04:12:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.553 04:12:07 bdev_raid.raid5f_state_function_test_sb 
-- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:07.553 04:12:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.553 04:12:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.553 04:12:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.553 04:12:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:07.553 04:12:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:07.553 04:12:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:14:07.553 04:12:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:07.553 04:12:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:07.553 04:12:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:07.553 04:12:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.553 04:12:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.553 BaseBdev2 00:14:07.553 04:12:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.553 04:12:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:07.553 04:12:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:07.553 04:12:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:07.553 04:12:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:07.553 04:12:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # 
[[ -z '' ]] 00:14:07.553 04:12:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:07.553 04:12:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:07.553 04:12:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.553 04:12:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.553 04:12:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.553 04:12:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:07.553 04:12:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.553 04:12:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.553 [ 00:14:07.553 { 00:14:07.553 "name": "BaseBdev2", 00:14:07.553 "aliases": [ 00:14:07.553 "f883febf-ef65-4950-98d4-8422d40a3f14" 00:14:07.553 ], 00:14:07.553 "product_name": "Malloc disk", 00:14:07.553 "block_size": 512, 00:14:07.553 "num_blocks": 65536, 00:14:07.553 "uuid": "f883febf-ef65-4950-98d4-8422d40a3f14", 00:14:07.553 "assigned_rate_limits": { 00:14:07.553 "rw_ios_per_sec": 0, 00:14:07.553 "rw_mbytes_per_sec": 0, 00:14:07.553 "r_mbytes_per_sec": 0, 00:14:07.553 "w_mbytes_per_sec": 0 00:14:07.553 }, 00:14:07.553 "claimed": false, 00:14:07.553 "zoned": false, 00:14:07.553 "supported_io_types": { 00:14:07.553 "read": true, 00:14:07.553 "write": true, 00:14:07.553 "unmap": true, 00:14:07.553 "flush": true, 00:14:07.553 "reset": true, 00:14:07.554 "nvme_admin": false, 00:14:07.554 "nvme_io": false, 00:14:07.554 "nvme_io_md": false, 00:14:07.554 "write_zeroes": true, 00:14:07.554 "zcopy": true, 00:14:07.554 "get_zone_info": false, 00:14:07.554 "zone_management": false, 00:14:07.554 "zone_append": false, 
00:14:07.554 "compare": false, 00:14:07.554 "compare_and_write": false, 00:14:07.554 "abort": true, 00:14:07.554 "seek_hole": false, 00:14:07.554 "seek_data": false, 00:14:07.554 "copy": true, 00:14:07.554 "nvme_iov_md": false 00:14:07.554 }, 00:14:07.554 "memory_domains": [ 00:14:07.554 { 00:14:07.554 "dma_device_id": "system", 00:14:07.554 "dma_device_type": 1 00:14:07.554 }, 00:14:07.554 { 00:14:07.554 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:07.554 "dma_device_type": 2 00:14:07.554 } 00:14:07.554 ], 00:14:07.554 "driver_specific": {} 00:14:07.554 } 00:14:07.554 ] 00:14:07.554 04:12:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.554 04:12:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:07.554 04:12:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:07.554 04:12:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:07.554 04:12:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:07.554 04:12:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.554 04:12:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.554 BaseBdev3 00:14:07.554 04:12:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.554 04:12:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:07.554 04:12:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:07.554 04:12:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:07.554 04:12:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:07.554 
04:12:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:07.554 04:12:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:07.554 04:12:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:07.554 04:12:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.554 04:12:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.554 04:12:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.554 04:12:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:07.554 04:12:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.554 04:12:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.554 [ 00:14:07.554 { 00:14:07.554 "name": "BaseBdev3", 00:14:07.554 "aliases": [ 00:14:07.554 "668ea4b5-55eb-4488-93a5-bd9439bcd8e2" 00:14:07.554 ], 00:14:07.554 "product_name": "Malloc disk", 00:14:07.554 "block_size": 512, 00:14:07.554 "num_blocks": 65536, 00:14:07.554 "uuid": "668ea4b5-55eb-4488-93a5-bd9439bcd8e2", 00:14:07.554 "assigned_rate_limits": { 00:14:07.554 "rw_ios_per_sec": 0, 00:14:07.554 "rw_mbytes_per_sec": 0, 00:14:07.554 "r_mbytes_per_sec": 0, 00:14:07.554 "w_mbytes_per_sec": 0 00:14:07.554 }, 00:14:07.554 "claimed": false, 00:14:07.554 "zoned": false, 00:14:07.554 "supported_io_types": { 00:14:07.554 "read": true, 00:14:07.554 "write": true, 00:14:07.554 "unmap": true, 00:14:07.554 "flush": true, 00:14:07.554 "reset": true, 00:14:07.554 "nvme_admin": false, 00:14:07.554 "nvme_io": false, 00:14:07.554 "nvme_io_md": false, 00:14:07.554 "write_zeroes": true, 00:14:07.554 "zcopy": true, 00:14:07.554 "get_zone_info": 
false, 00:14:07.554 "zone_management": false, 00:14:07.554 "zone_append": false, 00:14:07.554 "compare": false, 00:14:07.554 "compare_and_write": false, 00:14:07.554 "abort": true, 00:14:07.554 "seek_hole": false, 00:14:07.554 "seek_data": false, 00:14:07.554 "copy": true, 00:14:07.554 "nvme_iov_md": false 00:14:07.554 }, 00:14:07.554 "memory_domains": [ 00:14:07.554 { 00:14:07.554 "dma_device_id": "system", 00:14:07.554 "dma_device_type": 1 00:14:07.554 }, 00:14:07.554 { 00:14:07.554 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:07.554 "dma_device_type": 2 00:14:07.554 } 00:14:07.554 ], 00:14:07.554 "driver_specific": {} 00:14:07.554 } 00:14:07.554 ] 00:14:07.554 04:12:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.554 04:12:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:07.554 04:12:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:07.554 04:12:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:07.554 04:12:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:07.554 04:12:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.554 04:12:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.554 [2024-11-21 04:12:07.492276] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:07.554 [2024-11-21 04:12:07.492378] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:07.554 [2024-11-21 04:12:07.492417] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:07.554 [2024-11-21 04:12:07.494519] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev3 is claimed 00:14:07.554 04:12:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.554 04:12:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:07.554 04:12:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:07.554 04:12:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:07.554 04:12:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:07.554 04:12:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:07.554 04:12:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:07.554 04:12:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:07.554 04:12:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:07.554 04:12:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:07.554 04:12:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:07.554 04:12:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.554 04:12:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.554 04:12:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.554 04:12:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:07.554 04:12:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.814 04:12:07 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:07.814 "name": "Existed_Raid", 00:14:07.814 "uuid": "23faf3a7-5dbe-4066-9f26-56a8f243e496", 00:14:07.814 "strip_size_kb": 64, 00:14:07.814 "state": "configuring", 00:14:07.814 "raid_level": "raid5f", 00:14:07.814 "superblock": true, 00:14:07.814 "num_base_bdevs": 3, 00:14:07.814 "num_base_bdevs_discovered": 2, 00:14:07.814 "num_base_bdevs_operational": 3, 00:14:07.814 "base_bdevs_list": [ 00:14:07.814 { 00:14:07.814 "name": "BaseBdev1", 00:14:07.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.814 "is_configured": false, 00:14:07.814 "data_offset": 0, 00:14:07.814 "data_size": 0 00:14:07.814 }, 00:14:07.814 { 00:14:07.814 "name": "BaseBdev2", 00:14:07.814 "uuid": "f883febf-ef65-4950-98d4-8422d40a3f14", 00:14:07.814 "is_configured": true, 00:14:07.814 "data_offset": 2048, 00:14:07.814 "data_size": 63488 00:14:07.814 }, 00:14:07.814 { 00:14:07.814 "name": "BaseBdev3", 00:14:07.814 "uuid": "668ea4b5-55eb-4488-93a5-bd9439bcd8e2", 00:14:07.814 "is_configured": true, 00:14:07.814 "data_offset": 2048, 00:14:07.814 "data_size": 63488 00:14:07.814 } 00:14:07.814 ] 00:14:07.814 }' 00:14:07.814 04:12:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:07.814 04:12:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.075 04:12:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:08.075 04:12:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.075 04:12:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.075 [2024-11-21 04:12:07.911618] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:08.075 04:12:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.075 
04:12:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:08.075 04:12:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:08.075 04:12:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:08.075 04:12:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:08.075 04:12:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:08.075 04:12:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:08.075 04:12:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:08.075 04:12:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:08.075 04:12:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:08.075 04:12:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:08.075 04:12:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:08.075 04:12:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.075 04:12:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.075 04:12:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.075 04:12:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.075 04:12:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:08.075 "name": "Existed_Raid", 00:14:08.075 "uuid": 
"23faf3a7-5dbe-4066-9f26-56a8f243e496", 00:14:08.075 "strip_size_kb": 64, 00:14:08.075 "state": "configuring", 00:14:08.075 "raid_level": "raid5f", 00:14:08.075 "superblock": true, 00:14:08.075 "num_base_bdevs": 3, 00:14:08.075 "num_base_bdevs_discovered": 1, 00:14:08.075 "num_base_bdevs_operational": 3, 00:14:08.075 "base_bdevs_list": [ 00:14:08.075 { 00:14:08.075 "name": "BaseBdev1", 00:14:08.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.075 "is_configured": false, 00:14:08.075 "data_offset": 0, 00:14:08.075 "data_size": 0 00:14:08.075 }, 00:14:08.075 { 00:14:08.075 "name": null, 00:14:08.075 "uuid": "f883febf-ef65-4950-98d4-8422d40a3f14", 00:14:08.075 "is_configured": false, 00:14:08.075 "data_offset": 0, 00:14:08.075 "data_size": 63488 00:14:08.075 }, 00:14:08.075 { 00:14:08.075 "name": "BaseBdev3", 00:14:08.075 "uuid": "668ea4b5-55eb-4488-93a5-bd9439bcd8e2", 00:14:08.075 "is_configured": true, 00:14:08.075 "data_offset": 2048, 00:14:08.075 "data_size": 63488 00:14:08.075 } 00:14:08.075 ] 00:14:08.075 }' 00:14:08.075 04:12:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:08.075 04:12:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.646 04:12:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.646 04:12:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:08.646 04:12:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.646 04:12:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.646 04:12:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.646 04:12:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:08.646 04:12:08 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:08.646 04:12:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.646 04:12:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.646 [2024-11-21 04:12:08.363466] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:08.646 BaseBdev1 00:14:08.646 04:12:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.646 04:12:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:08.646 04:12:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:08.646 04:12:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:08.646 04:12:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:08.646 04:12:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:08.646 04:12:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:08.646 04:12:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:08.646 04:12:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.646 04:12:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.646 04:12:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.646 04:12:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:08.646 04:12:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 
-- # xtrace_disable 00:14:08.646 04:12:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.646 [ 00:14:08.646 { 00:14:08.646 "name": "BaseBdev1", 00:14:08.646 "aliases": [ 00:14:08.646 "f4ba2bcd-bc8b-4bfa-ac09-7d7badbf8ecf" 00:14:08.646 ], 00:14:08.646 "product_name": "Malloc disk", 00:14:08.646 "block_size": 512, 00:14:08.646 "num_blocks": 65536, 00:14:08.646 "uuid": "f4ba2bcd-bc8b-4bfa-ac09-7d7badbf8ecf", 00:14:08.646 "assigned_rate_limits": { 00:14:08.646 "rw_ios_per_sec": 0, 00:14:08.646 "rw_mbytes_per_sec": 0, 00:14:08.646 "r_mbytes_per_sec": 0, 00:14:08.646 "w_mbytes_per_sec": 0 00:14:08.646 }, 00:14:08.646 "claimed": true, 00:14:08.646 "claim_type": "exclusive_write", 00:14:08.646 "zoned": false, 00:14:08.646 "supported_io_types": { 00:14:08.646 "read": true, 00:14:08.646 "write": true, 00:14:08.646 "unmap": true, 00:14:08.646 "flush": true, 00:14:08.646 "reset": true, 00:14:08.646 "nvme_admin": false, 00:14:08.646 "nvme_io": false, 00:14:08.646 "nvme_io_md": false, 00:14:08.646 "write_zeroes": true, 00:14:08.646 "zcopy": true, 00:14:08.646 "get_zone_info": false, 00:14:08.646 "zone_management": false, 00:14:08.646 "zone_append": false, 00:14:08.646 "compare": false, 00:14:08.646 "compare_and_write": false, 00:14:08.646 "abort": true, 00:14:08.646 "seek_hole": false, 00:14:08.646 "seek_data": false, 00:14:08.646 "copy": true, 00:14:08.646 "nvme_iov_md": false 00:14:08.646 }, 00:14:08.646 "memory_domains": [ 00:14:08.646 { 00:14:08.646 "dma_device_id": "system", 00:14:08.646 "dma_device_type": 1 00:14:08.646 }, 00:14:08.646 { 00:14:08.646 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:08.646 "dma_device_type": 2 00:14:08.646 } 00:14:08.646 ], 00:14:08.646 "driver_specific": {} 00:14:08.646 } 00:14:08.646 ] 00:14:08.646 04:12:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.646 04:12:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # 
return 0 00:14:08.646 04:12:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:08.646 04:12:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:08.646 04:12:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:08.646 04:12:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:08.646 04:12:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:08.646 04:12:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:08.646 04:12:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:08.646 04:12:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:08.646 04:12:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:08.646 04:12:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:08.646 04:12:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.646 04:12:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:08.646 04:12:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.646 04:12:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.646 04:12:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.646 04:12:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:08.646 "name": "Existed_Raid", 00:14:08.646 "uuid": 
"23faf3a7-5dbe-4066-9f26-56a8f243e496", 00:14:08.646 "strip_size_kb": 64, 00:14:08.646 "state": "configuring", 00:14:08.646 "raid_level": "raid5f", 00:14:08.646 "superblock": true, 00:14:08.646 "num_base_bdevs": 3, 00:14:08.646 "num_base_bdevs_discovered": 2, 00:14:08.646 "num_base_bdevs_operational": 3, 00:14:08.646 "base_bdevs_list": [ 00:14:08.646 { 00:14:08.646 "name": "BaseBdev1", 00:14:08.646 "uuid": "f4ba2bcd-bc8b-4bfa-ac09-7d7badbf8ecf", 00:14:08.646 "is_configured": true, 00:14:08.646 "data_offset": 2048, 00:14:08.646 "data_size": 63488 00:14:08.646 }, 00:14:08.646 { 00:14:08.646 "name": null, 00:14:08.647 "uuid": "f883febf-ef65-4950-98d4-8422d40a3f14", 00:14:08.647 "is_configured": false, 00:14:08.647 "data_offset": 0, 00:14:08.647 "data_size": 63488 00:14:08.647 }, 00:14:08.647 { 00:14:08.647 "name": "BaseBdev3", 00:14:08.647 "uuid": "668ea4b5-55eb-4488-93a5-bd9439bcd8e2", 00:14:08.647 "is_configured": true, 00:14:08.647 "data_offset": 2048, 00:14:08.647 "data_size": 63488 00:14:08.647 } 00:14:08.647 ] 00:14:08.647 }' 00:14:08.647 04:12:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:08.647 04:12:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.907 04:12:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.907 04:12:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.907 04:12:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.907 04:12:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:08.907 04:12:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.907 04:12:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:08.907 04:12:08 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:08.907 04:12:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.907 04:12:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.907 [2024-11-21 04:12:08.866643] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:08.907 04:12:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.907 04:12:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:08.907 04:12:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:08.907 04:12:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:08.907 04:12:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:08.907 04:12:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:08.907 04:12:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:08.907 04:12:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:08.907 04:12:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:08.907 04:12:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:08.907 04:12:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:09.167 04:12:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.167 04:12:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:14:09.167 04:12:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.167 04:12:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.167 04:12:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.167 04:12:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:09.167 "name": "Existed_Raid", 00:14:09.167 "uuid": "23faf3a7-5dbe-4066-9f26-56a8f243e496", 00:14:09.167 "strip_size_kb": 64, 00:14:09.167 "state": "configuring", 00:14:09.167 "raid_level": "raid5f", 00:14:09.167 "superblock": true, 00:14:09.167 "num_base_bdevs": 3, 00:14:09.167 "num_base_bdevs_discovered": 1, 00:14:09.167 "num_base_bdevs_operational": 3, 00:14:09.167 "base_bdevs_list": [ 00:14:09.167 { 00:14:09.167 "name": "BaseBdev1", 00:14:09.167 "uuid": "f4ba2bcd-bc8b-4bfa-ac09-7d7badbf8ecf", 00:14:09.167 "is_configured": true, 00:14:09.167 "data_offset": 2048, 00:14:09.167 "data_size": 63488 00:14:09.167 }, 00:14:09.167 { 00:14:09.167 "name": null, 00:14:09.167 "uuid": "f883febf-ef65-4950-98d4-8422d40a3f14", 00:14:09.167 "is_configured": false, 00:14:09.167 "data_offset": 0, 00:14:09.167 "data_size": 63488 00:14:09.167 }, 00:14:09.167 { 00:14:09.167 "name": null, 00:14:09.167 "uuid": "668ea4b5-55eb-4488-93a5-bd9439bcd8e2", 00:14:09.167 "is_configured": false, 00:14:09.167 "data_offset": 0, 00:14:09.167 "data_size": 63488 00:14:09.167 } 00:14:09.167 ] 00:14:09.167 }' 00:14:09.167 04:12:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:09.167 04:12:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.427 04:12:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:09.427 04:12:09 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.427 04:12:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.427 04:12:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.427 04:12:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.427 04:12:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:09.427 04:12:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:09.427 04:12:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.427 04:12:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.427 [2024-11-21 04:12:09.273986] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:09.427 04:12:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.427 04:12:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:09.427 04:12:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:09.427 04:12:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:09.427 04:12:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:09.427 04:12:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:09.427 04:12:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:09.427 04:12:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:14:09.427 04:12:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:09.427 04:12:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:09.427 04:12:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:09.427 04:12:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.427 04:12:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:09.427 04:12:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.427 04:12:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.427 04:12:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.427 04:12:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:09.427 "name": "Existed_Raid", 00:14:09.427 "uuid": "23faf3a7-5dbe-4066-9f26-56a8f243e496", 00:14:09.427 "strip_size_kb": 64, 00:14:09.427 "state": "configuring", 00:14:09.427 "raid_level": "raid5f", 00:14:09.427 "superblock": true, 00:14:09.427 "num_base_bdevs": 3, 00:14:09.427 "num_base_bdevs_discovered": 2, 00:14:09.427 "num_base_bdevs_operational": 3, 00:14:09.427 "base_bdevs_list": [ 00:14:09.427 { 00:14:09.427 "name": "BaseBdev1", 00:14:09.427 "uuid": "f4ba2bcd-bc8b-4bfa-ac09-7d7badbf8ecf", 00:14:09.427 "is_configured": true, 00:14:09.427 "data_offset": 2048, 00:14:09.427 "data_size": 63488 00:14:09.427 }, 00:14:09.427 { 00:14:09.427 "name": null, 00:14:09.427 "uuid": "f883febf-ef65-4950-98d4-8422d40a3f14", 00:14:09.427 "is_configured": false, 00:14:09.427 "data_offset": 0, 00:14:09.427 "data_size": 63488 00:14:09.427 }, 00:14:09.427 { 00:14:09.427 "name": "BaseBdev3", 00:14:09.427 "uuid": "668ea4b5-55eb-4488-93a5-bd9439bcd8e2", 
00:14:09.427 "is_configured": true, 00:14:09.427 "data_offset": 2048, 00:14:09.427 "data_size": 63488 00:14:09.427 } 00:14:09.427 ] 00:14:09.427 }' 00:14:09.427 04:12:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:09.427 04:12:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.007 04:12:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.007 04:12:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:10.007 04:12:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.007 04:12:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.007 04:12:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.007 04:12:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:10.007 04:12:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:10.007 04:12:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.007 04:12:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.007 [2024-11-21 04:12:09.793119] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:10.007 04:12:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.007 04:12:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:10.007 04:12:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:10.007 04:12:09 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:10.007 04:12:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:10.007 04:12:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:10.007 04:12:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:10.007 04:12:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:10.007 04:12:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.007 04:12:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:10.007 04:12:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.007 04:12:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.007 04:12:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:10.007 04:12:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.007 04:12:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.007 04:12:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.007 04:12:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:10.008 "name": "Existed_Raid", 00:14:10.008 "uuid": "23faf3a7-5dbe-4066-9f26-56a8f243e496", 00:14:10.008 "strip_size_kb": 64, 00:14:10.008 "state": "configuring", 00:14:10.008 "raid_level": "raid5f", 00:14:10.008 "superblock": true, 00:14:10.008 "num_base_bdevs": 3, 00:14:10.008 "num_base_bdevs_discovered": 1, 00:14:10.008 "num_base_bdevs_operational": 3, 00:14:10.008 "base_bdevs_list": [ 00:14:10.008 { 00:14:10.008 
"name": null, 00:14:10.008 "uuid": "f4ba2bcd-bc8b-4bfa-ac09-7d7badbf8ecf", 00:14:10.008 "is_configured": false, 00:14:10.008 "data_offset": 0, 00:14:10.008 "data_size": 63488 00:14:10.008 }, 00:14:10.008 { 00:14:10.008 "name": null, 00:14:10.008 "uuid": "f883febf-ef65-4950-98d4-8422d40a3f14", 00:14:10.008 "is_configured": false, 00:14:10.008 "data_offset": 0, 00:14:10.008 "data_size": 63488 00:14:10.008 }, 00:14:10.008 { 00:14:10.008 "name": "BaseBdev3", 00:14:10.008 "uuid": "668ea4b5-55eb-4488-93a5-bd9439bcd8e2", 00:14:10.008 "is_configured": true, 00:14:10.008 "data_offset": 2048, 00:14:10.008 "data_size": 63488 00:14:10.008 } 00:14:10.008 ] 00:14:10.008 }' 00:14:10.008 04:12:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:10.008 04:12:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.284 04:12:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:10.284 04:12:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.284 04:12:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.284 04:12:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.284 04:12:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.284 04:12:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:10.284 04:12:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:10.284 04:12:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.284 04:12:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.284 [2024-11-21 
04:12:10.256076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:10.544 04:12:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.544 04:12:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:10.544 04:12:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:10.544 04:12:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:10.544 04:12:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:10.544 04:12:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:10.544 04:12:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:10.544 04:12:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:10.544 04:12:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.544 04:12:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:10.544 04:12:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.544 04:12:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:10.544 04:12:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.544 04:12:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.544 04:12:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.544 04:12:10 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.544 04:12:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:10.544 "name": "Existed_Raid", 00:14:10.544 "uuid": "23faf3a7-5dbe-4066-9f26-56a8f243e496", 00:14:10.544 "strip_size_kb": 64, 00:14:10.544 "state": "configuring", 00:14:10.544 "raid_level": "raid5f", 00:14:10.544 "superblock": true, 00:14:10.544 "num_base_bdevs": 3, 00:14:10.544 "num_base_bdevs_discovered": 2, 00:14:10.544 "num_base_bdevs_operational": 3, 00:14:10.544 "base_bdevs_list": [ 00:14:10.544 { 00:14:10.544 "name": null, 00:14:10.544 "uuid": "f4ba2bcd-bc8b-4bfa-ac09-7d7badbf8ecf", 00:14:10.544 "is_configured": false, 00:14:10.544 "data_offset": 0, 00:14:10.544 "data_size": 63488 00:14:10.544 }, 00:14:10.544 { 00:14:10.544 "name": "BaseBdev2", 00:14:10.544 "uuid": "f883febf-ef65-4950-98d4-8422d40a3f14", 00:14:10.544 "is_configured": true, 00:14:10.544 "data_offset": 2048, 00:14:10.544 "data_size": 63488 00:14:10.544 }, 00:14:10.544 { 00:14:10.544 "name": "BaseBdev3", 00:14:10.544 "uuid": "668ea4b5-55eb-4488-93a5-bd9439bcd8e2", 00:14:10.544 "is_configured": true, 00:14:10.544 "data_offset": 2048, 00:14:10.544 "data_size": 63488 00:14:10.544 } 00:14:10.544 ] 00:14:10.544 }' 00:14:10.544 04:12:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:10.544 04:12:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.804 04:12:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.804 04:12:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:10.804 04:12:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.805 04:12:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.805 04:12:10 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.805 04:12:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:10.805 04:12:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.805 04:12:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:10.805 04:12:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.805 04:12:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.805 04:12:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.065 04:12:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f4ba2bcd-bc8b-4bfa-ac09-7d7badbf8ecf 00:14:11.065 04:12:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.065 04:12:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.065 [2024-11-21 04:12:10.798953] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:11.065 [2024-11-21 04:12:10.799275] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:14:11.065 [2024-11-21 04:12:10.799332] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:11.065 NewBaseBdev 00:14:11.065 [2024-11-21 04:12:10.799650] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:14:11.065 [2024-11-21 04:12:10.800076] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:14:11.065 [2024-11-21 04:12:10.800141] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 
0x617000001c80 00:14:11.065 [2024-11-21 04:12:10.800299] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:11.065 04:12:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.065 04:12:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:11.065 04:12:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:11.065 04:12:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:11.065 04:12:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:11.065 04:12:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:11.065 04:12:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:11.065 04:12:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:11.065 04:12:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.065 04:12:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.065 04:12:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.065 04:12:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:11.065 04:12:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.065 04:12:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.065 [ 00:14:11.065 { 00:14:11.065 "name": "NewBaseBdev", 00:14:11.065 "aliases": [ 00:14:11.065 "f4ba2bcd-bc8b-4bfa-ac09-7d7badbf8ecf" 00:14:11.065 ], 00:14:11.065 "product_name": "Malloc disk", 00:14:11.065 
"block_size": 512, 00:14:11.065 "num_blocks": 65536, 00:14:11.065 "uuid": "f4ba2bcd-bc8b-4bfa-ac09-7d7badbf8ecf", 00:14:11.065 "assigned_rate_limits": { 00:14:11.065 "rw_ios_per_sec": 0, 00:14:11.065 "rw_mbytes_per_sec": 0, 00:14:11.065 "r_mbytes_per_sec": 0, 00:14:11.065 "w_mbytes_per_sec": 0 00:14:11.065 }, 00:14:11.065 "claimed": true, 00:14:11.065 "claim_type": "exclusive_write", 00:14:11.065 "zoned": false, 00:14:11.065 "supported_io_types": { 00:14:11.065 "read": true, 00:14:11.065 "write": true, 00:14:11.065 "unmap": true, 00:14:11.065 "flush": true, 00:14:11.065 "reset": true, 00:14:11.065 "nvme_admin": false, 00:14:11.065 "nvme_io": false, 00:14:11.065 "nvme_io_md": false, 00:14:11.065 "write_zeroes": true, 00:14:11.065 "zcopy": true, 00:14:11.065 "get_zone_info": false, 00:14:11.065 "zone_management": false, 00:14:11.065 "zone_append": false, 00:14:11.065 "compare": false, 00:14:11.065 "compare_and_write": false, 00:14:11.065 "abort": true, 00:14:11.065 "seek_hole": false, 00:14:11.065 "seek_data": false, 00:14:11.065 "copy": true, 00:14:11.065 "nvme_iov_md": false 00:14:11.065 }, 00:14:11.065 "memory_domains": [ 00:14:11.065 { 00:14:11.065 "dma_device_id": "system", 00:14:11.065 "dma_device_type": 1 00:14:11.065 }, 00:14:11.065 { 00:14:11.065 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:11.065 "dma_device_type": 2 00:14:11.065 } 00:14:11.065 ], 00:14:11.065 "driver_specific": {} 00:14:11.065 } 00:14:11.065 ] 00:14:11.065 04:12:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.065 04:12:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:11.065 04:12:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:11.065 04:12:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:11.065 04:12:10 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:11.065 04:12:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:11.065 04:12:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:11.065 04:12:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:11.065 04:12:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:11.065 04:12:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:11.065 04:12:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:11.065 04:12:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:11.065 04:12:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.065 04:12:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:11.065 04:12:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.065 04:12:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.065 04:12:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.066 04:12:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:11.066 "name": "Existed_Raid", 00:14:11.066 "uuid": "23faf3a7-5dbe-4066-9f26-56a8f243e496", 00:14:11.066 "strip_size_kb": 64, 00:14:11.066 "state": "online", 00:14:11.066 "raid_level": "raid5f", 00:14:11.066 "superblock": true, 00:14:11.066 "num_base_bdevs": 3, 00:14:11.066 "num_base_bdevs_discovered": 3, 00:14:11.066 "num_base_bdevs_operational": 3, 00:14:11.066 
"base_bdevs_list": [ 00:14:11.066 { 00:14:11.066 "name": "NewBaseBdev", 00:14:11.066 "uuid": "f4ba2bcd-bc8b-4bfa-ac09-7d7badbf8ecf", 00:14:11.066 "is_configured": true, 00:14:11.066 "data_offset": 2048, 00:14:11.066 "data_size": 63488 00:14:11.066 }, 00:14:11.066 { 00:14:11.066 "name": "BaseBdev2", 00:14:11.066 "uuid": "f883febf-ef65-4950-98d4-8422d40a3f14", 00:14:11.066 "is_configured": true, 00:14:11.066 "data_offset": 2048, 00:14:11.066 "data_size": 63488 00:14:11.066 }, 00:14:11.066 { 00:14:11.066 "name": "BaseBdev3", 00:14:11.066 "uuid": "668ea4b5-55eb-4488-93a5-bd9439bcd8e2", 00:14:11.066 "is_configured": true, 00:14:11.066 "data_offset": 2048, 00:14:11.066 "data_size": 63488 00:14:11.066 } 00:14:11.066 ] 00:14:11.066 }' 00:14:11.066 04:12:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:11.066 04:12:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.326 04:12:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:11.326 04:12:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:11.326 04:12:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:11.326 04:12:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:11.326 04:12:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:11.326 04:12:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:11.326 04:12:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:11.326 04:12:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:11.326 04:12:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:11.326 04:12:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.326 [2024-11-21 04:12:11.254427] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:11.326 04:12:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.326 04:12:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:11.326 "name": "Existed_Raid", 00:14:11.326 "aliases": [ 00:14:11.326 "23faf3a7-5dbe-4066-9f26-56a8f243e496" 00:14:11.326 ], 00:14:11.326 "product_name": "Raid Volume", 00:14:11.326 "block_size": 512, 00:14:11.326 "num_blocks": 126976, 00:14:11.326 "uuid": "23faf3a7-5dbe-4066-9f26-56a8f243e496", 00:14:11.326 "assigned_rate_limits": { 00:14:11.326 "rw_ios_per_sec": 0, 00:14:11.326 "rw_mbytes_per_sec": 0, 00:14:11.326 "r_mbytes_per_sec": 0, 00:14:11.326 "w_mbytes_per_sec": 0 00:14:11.326 }, 00:14:11.326 "claimed": false, 00:14:11.326 "zoned": false, 00:14:11.326 "supported_io_types": { 00:14:11.326 "read": true, 00:14:11.326 "write": true, 00:14:11.326 "unmap": false, 00:14:11.326 "flush": false, 00:14:11.326 "reset": true, 00:14:11.326 "nvme_admin": false, 00:14:11.326 "nvme_io": false, 00:14:11.326 "nvme_io_md": false, 00:14:11.326 "write_zeroes": true, 00:14:11.326 "zcopy": false, 00:14:11.326 "get_zone_info": false, 00:14:11.326 "zone_management": false, 00:14:11.326 "zone_append": false, 00:14:11.326 "compare": false, 00:14:11.326 "compare_and_write": false, 00:14:11.326 "abort": false, 00:14:11.326 "seek_hole": false, 00:14:11.326 "seek_data": false, 00:14:11.326 "copy": false, 00:14:11.326 "nvme_iov_md": false 00:14:11.326 }, 00:14:11.326 "driver_specific": { 00:14:11.326 "raid": { 00:14:11.326 "uuid": "23faf3a7-5dbe-4066-9f26-56a8f243e496", 00:14:11.326 "strip_size_kb": 64, 00:14:11.326 "state": "online", 00:14:11.326 "raid_level": "raid5f", 00:14:11.326 "superblock": true, 00:14:11.326 
"num_base_bdevs": 3, 00:14:11.326 "num_base_bdevs_discovered": 3, 00:14:11.326 "num_base_bdevs_operational": 3, 00:14:11.326 "base_bdevs_list": [ 00:14:11.326 { 00:14:11.326 "name": "NewBaseBdev", 00:14:11.326 "uuid": "f4ba2bcd-bc8b-4bfa-ac09-7d7badbf8ecf", 00:14:11.326 "is_configured": true, 00:14:11.326 "data_offset": 2048, 00:14:11.326 "data_size": 63488 00:14:11.326 }, 00:14:11.326 { 00:14:11.326 "name": "BaseBdev2", 00:14:11.326 "uuid": "f883febf-ef65-4950-98d4-8422d40a3f14", 00:14:11.326 "is_configured": true, 00:14:11.326 "data_offset": 2048, 00:14:11.326 "data_size": 63488 00:14:11.326 }, 00:14:11.326 { 00:14:11.326 "name": "BaseBdev3", 00:14:11.326 "uuid": "668ea4b5-55eb-4488-93a5-bd9439bcd8e2", 00:14:11.326 "is_configured": true, 00:14:11.326 "data_offset": 2048, 00:14:11.326 "data_size": 63488 00:14:11.326 } 00:14:11.326 ] 00:14:11.326 } 00:14:11.326 } 00:14:11.326 }' 00:14:11.326 04:12:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:11.586 04:12:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:11.586 BaseBdev2 00:14:11.586 BaseBdev3' 00:14:11.586 04:12:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:11.586 04:12:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:11.586 04:12:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:11.586 04:12:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:11.586 04:12:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.586 04:12:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.586 
04:12:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:11.586 04:12:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.586 04:12:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:11.586 04:12:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:11.586 04:12:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:11.586 04:12:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:11.586 04:12:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.586 04:12:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:11.586 04:12:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.586 04:12:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.586 04:12:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:11.586 04:12:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:11.586 04:12:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:11.586 04:12:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:11.586 04:12:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:11.586 04:12:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:11.586 04:12:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.586 04:12:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.586 04:12:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:11.586 04:12:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:11.586 04:12:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:11.586 04:12:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.586 04:12:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.586 [2024-11-21 04:12:11.501805] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:11.586 [2024-11-21 04:12:11.501867] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:11.586 [2024-11-21 04:12:11.501964] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:11.586 [2024-11-21 04:12:11.502266] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:11.586 [2024-11-21 04:12:11.502321] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:14:11.586 04:12:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.586 04:12:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 91108 00:14:11.586 04:12:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 91108 ']' 00:14:11.586 04:12:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 91108 00:14:11.586 04:12:11 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:14:11.586 04:12:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:11.586 04:12:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91108 00:14:11.586 04:12:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:11.586 killing process with pid 91108 00:14:11.586 04:12:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:11.586 04:12:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91108' 00:14:11.586 04:12:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 91108 00:14:11.586 [2024-11-21 04:12:11.539852] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:11.586 04:12:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 91108 00:14:11.846 [2024-11-21 04:12:11.596488] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:12.106 04:12:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:14:12.106 00:14:12.106 real 0m8.964s 00:14:12.106 user 0m15.069s 00:14:12.106 sys 0m1.921s 00:14:12.106 ************************************ 00:14:12.106 END TEST raid5f_state_function_test_sb 00:14:12.106 ************************************ 00:14:12.106 04:12:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:12.106 04:12:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.106 04:12:11 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:14:12.106 04:12:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:12.106 
04:12:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:12.106 04:12:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:12.106 ************************************ 00:14:12.106 START TEST raid5f_superblock_test 00:14:12.106 ************************************ 00:14:12.106 04:12:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 00:14:12.106 04:12:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:14:12.106 04:12:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:14:12.106 04:12:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:12.106 04:12:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:12.106 04:12:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:12.106 04:12:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:14:12.106 04:12:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:12.106 04:12:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:12.106 04:12:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:12.106 04:12:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:12.106 04:12:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:12.106 04:12:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:12.106 04:12:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:12.106 04:12:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:14:12.106 04:12:12 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:14:12.106 04:12:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:14:12.106 04:12:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=91707 00:14:12.106 04:12:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:12.106 04:12:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 91707 00:14:12.106 04:12:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 91707 ']' 00:14:12.106 04:12:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:12.106 04:12:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:12.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:12.106 04:12:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:12.106 04:12:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:12.107 04:12:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.366 [2024-11-21 04:12:12.090653] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:14:12.366 [2024-11-21 04:12:12.090773] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91707 ] 00:14:12.366 [2024-11-21 04:12:12.242969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:12.366 [2024-11-21 04:12:12.282064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:12.626 [2024-11-21 04:12:12.357736] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:12.626 [2024-11-21 04:12:12.357781] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:13.249 04:12:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:13.249 04:12:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:14:13.249 04:12:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:13.249 04:12:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:13.249 04:12:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:13.249 04:12:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:13.249 04:12:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:13.249 04:12:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:13.249 04:12:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:13.249 04:12:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:13.249 04:12:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:14:13.249 04:12:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.249 04:12:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.249 malloc1 00:14:13.249 04:12:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.249 04:12:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:13.249 04:12:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.249 04:12:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.249 [2024-11-21 04:12:12.939967] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:13.249 [2024-11-21 04:12:12.940095] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:13.249 [2024-11-21 04:12:12.940128] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:14:13.249 [2024-11-21 04:12:12.940186] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:13.249 [2024-11-21 04:12:12.942663] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:13.249 [2024-11-21 04:12:12.942743] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:13.249 pt1 00:14:13.249 04:12:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.249 04:12:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:13.249 04:12:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:13.249 04:12:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:13.249 04:12:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:14:13.249 04:12:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:13.249 04:12:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:13.249 04:12:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:13.249 04:12:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:13.249 04:12:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:14:13.249 04:12:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.249 04:12:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.249 malloc2 00:14:13.249 04:12:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.249 04:12:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:13.249 04:12:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.249 04:12:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.249 [2024-11-21 04:12:12.978402] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:13.249 [2024-11-21 04:12:12.978501] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:13.249 [2024-11-21 04:12:12.978531] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:13.249 [2024-11-21 04:12:12.978558] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:13.249 [2024-11-21 04:12:12.981007] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:13.249 [2024-11-21 04:12:12.981083] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:13.249 pt2 00:14:13.249 04:12:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.249 04:12:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:13.249 04:12:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:13.249 04:12:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:14:13.249 04:12:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:14:13.249 04:12:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:13.249 04:12:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:13.249 04:12:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:13.249 04:12:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:13.249 04:12:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:14:13.249 04:12:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.249 04:12:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.249 malloc3 00:14:13.249 04:12:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.249 04:12:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:13.249 04:12:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.249 04:12:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.249 [2024-11-21 04:12:13.012775] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:13.249 [2024-11-21 04:12:13.012882] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:13.249 [2024-11-21 04:12:13.012919] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:13.249 [2024-11-21 04:12:13.012948] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:13.249 [2024-11-21 04:12:13.015357] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:13.249 [2024-11-21 04:12:13.015426] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:13.249 pt3 00:14:13.249 04:12:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.249 04:12:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:13.249 04:12:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:13.249 04:12:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:14:13.249 04:12:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.249 04:12:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.249 [2024-11-21 04:12:13.024842] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:13.249 [2024-11-21 04:12:13.027021] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:13.249 [2024-11-21 04:12:13.027114] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:13.249 [2024-11-21 04:12:13.027343] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:14:13.249 [2024-11-21 04:12:13.027360] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
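Once all three passthru bdevs exist, the trace shows the array-creation step: a raid5f bdev with a 64 KiB strip size and an on-disk superblock (`-s`), whose state is then read back and checked. A dry-run sketch of that step follows; the command and arguments are copied from this trace, while the `rpc.py` form and the echo-only `rpc_cmd` stand-in are assumptions (the real call needs a live SPDK target, and verification pipes `bdev_raid_get_bdevs all` through the jq filter shown in the comment).

```shell
# Sketch of the raid5f creation + verification seen in the trace
# (bdev_raid.sh@430-431). Dry run: rpc_cmd echoes the JSON-RPC call.
rpc_cmd() { echo "rpc.py $*"; }

# -z 64: strip size in KiB; -r raid5f: raid level; -s: write a superblock
# to each base bdev so the array can be re-assembled on examine.
create_cmd=$(rpc_cmd bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3' -n raid_bdev1 -s)
echo "$create_cmd"

# Verification (against a live target) would then check the returned JSON,
# expecting state "online" with all 3 base bdevs discovered:
#   rpc_cmd bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'
```

The superblock written here is also what drives the later negative test in this log: after `bdev_raid_delete` and deleting the passthru bdevs, recreating the array directly on `malloc1..3` fails with `-17` ("File exists") because examine finds the old raid superblock on the base bdevs.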
00:14:13.249 [2024-11-21 04:12:13.027629] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:14:13.249 [2024-11-21 04:12:13.028062] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:14:13.249 [2024-11-21 04:12:13.028075] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:14:13.249 [2024-11-21 04:12:13.028249] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:13.249 04:12:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.249 04:12:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:13.250 04:12:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:13.250 04:12:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:13.250 04:12:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:13.250 04:12:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:13.250 04:12:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:13.250 04:12:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:13.250 04:12:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:13.250 04:12:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:13.250 04:12:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:13.250 04:12:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.250 04:12:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:14:13.250 04:12:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.250 04:12:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.250 04:12:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.250 04:12:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:13.250 "name": "raid_bdev1", 00:14:13.250 "uuid": "7b6c5cc4-cc5f-4a11-aaf6-9302d58c6455", 00:14:13.250 "strip_size_kb": 64, 00:14:13.250 "state": "online", 00:14:13.250 "raid_level": "raid5f", 00:14:13.250 "superblock": true, 00:14:13.250 "num_base_bdevs": 3, 00:14:13.250 "num_base_bdevs_discovered": 3, 00:14:13.250 "num_base_bdevs_operational": 3, 00:14:13.250 "base_bdevs_list": [ 00:14:13.250 { 00:14:13.250 "name": "pt1", 00:14:13.250 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:13.250 "is_configured": true, 00:14:13.250 "data_offset": 2048, 00:14:13.250 "data_size": 63488 00:14:13.250 }, 00:14:13.250 { 00:14:13.250 "name": "pt2", 00:14:13.250 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:13.250 "is_configured": true, 00:14:13.250 "data_offset": 2048, 00:14:13.250 "data_size": 63488 00:14:13.250 }, 00:14:13.250 { 00:14:13.250 "name": "pt3", 00:14:13.250 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:13.250 "is_configured": true, 00:14:13.250 "data_offset": 2048, 00:14:13.250 "data_size": 63488 00:14:13.250 } 00:14:13.250 ] 00:14:13.250 }' 00:14:13.250 04:12:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:13.250 04:12:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.827 04:12:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:13.827 04:12:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:13.827 04:12:13 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:13.827 04:12:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:13.827 04:12:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:13.827 04:12:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:13.827 04:12:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:13.827 04:12:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:13.827 04:12:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.827 04:12:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.827 [2024-11-21 04:12:13.513805] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:13.827 04:12:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.827 04:12:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:13.827 "name": "raid_bdev1", 00:14:13.827 "aliases": [ 00:14:13.827 "7b6c5cc4-cc5f-4a11-aaf6-9302d58c6455" 00:14:13.827 ], 00:14:13.827 "product_name": "Raid Volume", 00:14:13.827 "block_size": 512, 00:14:13.827 "num_blocks": 126976, 00:14:13.827 "uuid": "7b6c5cc4-cc5f-4a11-aaf6-9302d58c6455", 00:14:13.827 "assigned_rate_limits": { 00:14:13.827 "rw_ios_per_sec": 0, 00:14:13.827 "rw_mbytes_per_sec": 0, 00:14:13.827 "r_mbytes_per_sec": 0, 00:14:13.827 "w_mbytes_per_sec": 0 00:14:13.827 }, 00:14:13.827 "claimed": false, 00:14:13.827 "zoned": false, 00:14:13.827 "supported_io_types": { 00:14:13.827 "read": true, 00:14:13.827 "write": true, 00:14:13.827 "unmap": false, 00:14:13.827 "flush": false, 00:14:13.827 "reset": true, 00:14:13.827 "nvme_admin": false, 00:14:13.827 "nvme_io": false, 00:14:13.827 "nvme_io_md": false, 
00:14:13.827 "write_zeroes": true, 00:14:13.827 "zcopy": false, 00:14:13.827 "get_zone_info": false, 00:14:13.827 "zone_management": false, 00:14:13.827 "zone_append": false, 00:14:13.827 "compare": false, 00:14:13.827 "compare_and_write": false, 00:14:13.827 "abort": false, 00:14:13.827 "seek_hole": false, 00:14:13.827 "seek_data": false, 00:14:13.827 "copy": false, 00:14:13.827 "nvme_iov_md": false 00:14:13.827 }, 00:14:13.827 "driver_specific": { 00:14:13.827 "raid": { 00:14:13.827 "uuid": "7b6c5cc4-cc5f-4a11-aaf6-9302d58c6455", 00:14:13.827 "strip_size_kb": 64, 00:14:13.827 "state": "online", 00:14:13.827 "raid_level": "raid5f", 00:14:13.827 "superblock": true, 00:14:13.827 "num_base_bdevs": 3, 00:14:13.827 "num_base_bdevs_discovered": 3, 00:14:13.827 "num_base_bdevs_operational": 3, 00:14:13.827 "base_bdevs_list": [ 00:14:13.828 { 00:14:13.828 "name": "pt1", 00:14:13.828 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:13.828 "is_configured": true, 00:14:13.828 "data_offset": 2048, 00:14:13.828 "data_size": 63488 00:14:13.828 }, 00:14:13.828 { 00:14:13.828 "name": "pt2", 00:14:13.828 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:13.828 "is_configured": true, 00:14:13.828 "data_offset": 2048, 00:14:13.828 "data_size": 63488 00:14:13.828 }, 00:14:13.828 { 00:14:13.828 "name": "pt3", 00:14:13.828 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:13.828 "is_configured": true, 00:14:13.828 "data_offset": 2048, 00:14:13.828 "data_size": 63488 00:14:13.828 } 00:14:13.828 ] 00:14:13.828 } 00:14:13.828 } 00:14:13.828 }' 00:14:13.828 04:12:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:13.828 04:12:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:13.828 pt2 00:14:13.828 pt3' 00:14:13.828 04:12:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:14:13.828 04:12:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:13.828 04:12:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:13.828 04:12:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:13.828 04:12:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:13.828 04:12:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.828 04:12:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.828 04:12:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.828 04:12:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:13.828 04:12:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:13.828 04:12:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:13.828 04:12:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:13.828 04:12:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:13.828 04:12:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.828 04:12:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.828 04:12:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.828 04:12:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:13.828 04:12:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:13.828 
04:12:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:13.828 04:12:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:13.828 04:12:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.828 04:12:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.828 04:12:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:13.828 04:12:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.828 04:12:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:13.828 04:12:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:13.828 04:12:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:14:13.828 04:12:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:13.828 04:12:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.828 04:12:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.828 [2024-11-21 04:12:13.757388] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:13.828 04:12:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.828 04:12:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=7b6c5cc4-cc5f-4a11-aaf6-9302d58c6455 00:14:13.828 04:12:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 7b6c5cc4-cc5f-4a11-aaf6-9302d58c6455 ']' 00:14:13.828 04:12:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:13.828 04:12:13 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.828 04:12:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.828 [2024-11-21 04:12:13.785186] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:13.828 [2024-11-21 04:12:13.785263] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:13.828 [2024-11-21 04:12:13.785340] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:13.828 [2024-11-21 04:12:13.785407] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:13.828 [2024-11-21 04:12:13.785421] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:14:13.828 04:12:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.828 04:12:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.828 04:12:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:13.828 04:12:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.828 04:12:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.088 04:12:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.088 04:12:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:14:14.088 04:12:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:14.088 04:12:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:14.088 04:12:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:14:14.088 04:12:13 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.088 04:12:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.088 04:12:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.088 04:12:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:14.088 04:12:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:14:14.088 04:12:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.088 04:12:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.088 04:12:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.088 04:12:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:14.088 04:12:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:14:14.088 04:12:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.088 04:12:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.088 04:12:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.088 04:12:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:14:14.088 04:12:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:14.089 04:12:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.089 04:12:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.089 04:12:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.089 04:12:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:14:14.089 04:12:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:14.089 04:12:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:14:14.089 04:12:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:14.089 04:12:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:14.089 04:12:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:14.089 04:12:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:14.089 04:12:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:14.089 04:12:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:14.089 04:12:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.089 04:12:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.089 [2024-11-21 04:12:13.940936] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:14.089 [2024-11-21 04:12:13.943098] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:14.089 [2024-11-21 04:12:13.943193] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:14.089 [2024-11-21 04:12:13.943262] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:14.089 [2024-11-21 04:12:13.943335] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:14.089 [2024-11-21 04:12:13.943408] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:14:14.089 [2024-11-21 04:12:13.943468] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:14.089 [2024-11-21 04:12:13.943519] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:14:14.089 request: 00:14:14.089 { 00:14:14.089 "name": "raid_bdev1", 00:14:14.089 "raid_level": "raid5f", 00:14:14.089 "base_bdevs": [ 00:14:14.089 "malloc1", 00:14:14.089 "malloc2", 00:14:14.089 "malloc3" 00:14:14.089 ], 00:14:14.089 "strip_size_kb": 64, 00:14:14.089 "superblock": false, 00:14:14.089 "method": "bdev_raid_create", 00:14:14.089 "req_id": 1 00:14:14.089 } 00:14:14.089 Got JSON-RPC error response 00:14:14.089 response: 00:14:14.089 { 00:14:14.089 "code": -17, 00:14:14.089 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:14.089 } 00:14:14.089 04:12:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:14.089 04:12:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:14:14.089 04:12:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:14.089 04:12:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:14.089 04:12:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:14.089 04:12:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.089 04:12:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:14.089 04:12:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.089 
04:12:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.089 04:12:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.089 04:12:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:14:14.089 04:12:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:14:14.089 04:12:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:14.089 04:12:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.089 04:12:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.089 [2024-11-21 04:12:14.008785] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:14.089 [2024-11-21 04:12:14.008887] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:14.089 [2024-11-21 04:12:14.008918] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:14.089 [2024-11-21 04:12:14.008946] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:14.089 [2024-11-21 04:12:14.011446] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:14.089 [2024-11-21 04:12:14.011542] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:14.089 [2024-11-21 04:12:14.011622] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:14.089 [2024-11-21 04:12:14.011695] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:14.089 pt1 00:14:14.089 04:12:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.089 04:12:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 
3 00:14:14.089 04:12:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:14.089 04:12:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:14.089 04:12:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:14.089 04:12:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:14.089 04:12:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:14.089 04:12:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:14.089 04:12:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:14.089 04:12:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:14.089 04:12:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:14.089 04:12:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.089 04:12:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.089 04:12:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.089 04:12:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.089 04:12:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.349 04:12:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:14.349 "name": "raid_bdev1", 00:14:14.349 "uuid": "7b6c5cc4-cc5f-4a11-aaf6-9302d58c6455", 00:14:14.349 "strip_size_kb": 64, 00:14:14.349 "state": "configuring", 00:14:14.349 "raid_level": "raid5f", 00:14:14.349 "superblock": true, 00:14:14.349 "num_base_bdevs": 3, 00:14:14.349 "num_base_bdevs_discovered": 1, 00:14:14.349 
"num_base_bdevs_operational": 3, 00:14:14.349 "base_bdevs_list": [ 00:14:14.349 { 00:14:14.349 "name": "pt1", 00:14:14.349 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:14.349 "is_configured": true, 00:14:14.349 "data_offset": 2048, 00:14:14.349 "data_size": 63488 00:14:14.349 }, 00:14:14.349 { 00:14:14.349 "name": null, 00:14:14.349 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:14.349 "is_configured": false, 00:14:14.349 "data_offset": 2048, 00:14:14.349 "data_size": 63488 00:14:14.349 }, 00:14:14.349 { 00:14:14.349 "name": null, 00:14:14.349 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:14.349 "is_configured": false, 00:14:14.349 "data_offset": 2048, 00:14:14.349 "data_size": 63488 00:14:14.349 } 00:14:14.349 ] 00:14:14.349 }' 00:14:14.349 04:12:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:14.349 04:12:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.609 04:12:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:14:14.609 04:12:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:14.609 04:12:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.609 04:12:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.609 [2024-11-21 04:12:14.472073] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:14.609 [2024-11-21 04:12:14.472180] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:14.609 [2024-11-21 04:12:14.472204] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:14.609 [2024-11-21 04:12:14.472215] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:14.609 [2024-11-21 04:12:14.472611] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:14.609 [2024-11-21 04:12:14.472632] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:14.609 [2024-11-21 04:12:14.472690] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:14.609 [2024-11-21 04:12:14.472711] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:14.609 pt2 00:14:14.609 04:12:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.609 04:12:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:14:14.609 04:12:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.609 04:12:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.609 [2024-11-21 04:12:14.484069] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:14.609 04:12:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.609 04:12:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:14:14.609 04:12:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:14.609 04:12:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:14.609 04:12:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:14.609 04:12:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:14.609 04:12:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:14.609 04:12:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:14.609 04:12:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs
00:14:14.609 04:12:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:14.609 04:12:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:14.609 04:12:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:14.609 04:12:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:14.609 04:12:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:14.609 04:12:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:14.609 04:12:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:14.609 04:12:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:14.609 "name": "raid_bdev1",
00:14:14.609 "uuid": "7b6c5cc4-cc5f-4a11-aaf6-9302d58c6455",
00:14:14.609 "strip_size_kb": 64,
00:14:14.609 "state": "configuring",
00:14:14.609 "raid_level": "raid5f",
00:14:14.609 "superblock": true,
00:14:14.609 "num_base_bdevs": 3,
00:14:14.609 "num_base_bdevs_discovered": 1,
00:14:14.609 "num_base_bdevs_operational": 3,
00:14:14.609 "base_bdevs_list": [
00:14:14.609 {
00:14:14.609 "name": "pt1",
00:14:14.609 "uuid": "00000000-0000-0000-0000-000000000001",
00:14:14.609 "is_configured": true,
00:14:14.609 "data_offset": 2048,
00:14:14.609 "data_size": 63488
00:14:14.609 },
00:14:14.609 {
00:14:14.609 "name": null,
00:14:14.609 "uuid": "00000000-0000-0000-0000-000000000002",
00:14:14.609 "is_configured": false,
00:14:14.609 "data_offset": 0,
00:14:14.609 "data_size": 63488
00:14:14.609 },
00:14:14.609 {
00:14:14.609 "name": null,
00:14:14.609 "uuid": "00000000-0000-0000-0000-000000000003",
00:14:14.609 "is_configured": false,
00:14:14.609 "data_offset": 2048,
00:14:14.609 "data_size": 63488
00:14:14.609 }
00:14:14.609 ]
00:14:14.609 }'
00:14:14.609 04:12:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:14.609 04:12:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:15.178 04:12:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:14:15.178 04:12:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:14:15.179 04:12:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:14:15.179 04:12:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:15.179 04:12:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:15.179 [2024-11-21 04:12:14.939305] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:14:15.179 [2024-11-21 04:12:14.939387] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:15.179 [2024-11-21 04:12:14.939419] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380
00:14:15.179 [2024-11-21 04:12:14.939444] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:15.179 [2024-11-21 04:12:14.939834] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:15.179 [2024-11-21 04:12:14.939888] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:14:15.179 [2024-11-21 04:12:14.939982] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:14:15.179 [2024-11-21 04:12:14.940026] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:14:15.179 pt2
00:14:15.179 04:12:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:15.179 04:12:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:14:15.179 04:12:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:14:15.179 04:12:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:14:15.179 04:12:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:15.179 04:12:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:15.179 [2024-11-21 04:12:14.951284] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:14:15.179 [2024-11-21 04:12:14.951354] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:15.179 [2024-11-21 04:12:14.951388] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:14:15.179 [2024-11-21 04:12:14.951411] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:15.179 [2024-11-21 04:12:14.951779] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:15.179 [2024-11-21 04:12:14.951830] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:14:15.179 [2024-11-21 04:12:14.951920] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:14:15.179 [2024-11-21 04:12:14.951963] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:14:15.179 [2024-11-21 04:12:14.952095] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900
00:14:15.179 [2024-11-21 04:12:14.952157] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:14:15.179 [2024-11-21 04:12:14.952461] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530
00:14:15.179 [2024-11-21 04:12:14.952877] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900
00:14:15.179 [2024-11-21 04:12:14.952925] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900
00:14:15.179 [2024-11-21 04:12:14.953072] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:15.179 pt3
00:14:15.179 04:12:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:15.179 04:12:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:14:15.179 04:12:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:14:15.179 04:12:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:14:15.179 04:12:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:15.179 04:12:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:15.179 04:12:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:14:15.179 04:12:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:15.179 04:12:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:14:15.179 04:12:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:15.179 04:12:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:15.179 04:12:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:15.179 04:12:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:15.179 04:12:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:15.179 04:12:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:15.179 04:12:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:15.179 04:12:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:15.179 04:12:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:15.179 04:12:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:15.179 "name": "raid_bdev1",
00:14:15.179 "uuid": "7b6c5cc4-cc5f-4a11-aaf6-9302d58c6455",
00:14:15.179 "strip_size_kb": 64,
00:14:15.179 "state": "online",
00:14:15.179 "raid_level": "raid5f",
00:14:15.179 "superblock": true,
00:14:15.179 "num_base_bdevs": 3,
00:14:15.179 "num_base_bdevs_discovered": 3,
00:14:15.179 "num_base_bdevs_operational": 3,
00:14:15.179 "base_bdevs_list": [
00:14:15.179 {
00:14:15.179 "name": "pt1",
00:14:15.179 "uuid": "00000000-0000-0000-0000-000000000001",
00:14:15.179 "is_configured": true,
00:14:15.179 "data_offset": 2048,
00:14:15.179 "data_size": 63488
00:14:15.179 },
00:14:15.179 {
00:14:15.179 "name": "pt2",
00:14:15.179 "uuid": "00000000-0000-0000-0000-000000000002",
00:14:15.179 "is_configured": true,
00:14:15.179 "data_offset": 2048,
00:14:15.179 "data_size": 63488
00:14:15.179 },
00:14:15.179 {
00:14:15.179 "name": "pt3",
00:14:15.179 "uuid": "00000000-0000-0000-0000-000000000003",
00:14:15.179 "is_configured": true,
00:14:15.179 "data_offset": 2048,
00:14:15.179 "data_size": 63488
00:14:15.179 }
00:14:15.179 ]
00:14:15.179 }'
00:14:15.179 04:12:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:15.179 04:12:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:15.439 04:12:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:14:15.439 04:12:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:14:15.439 04:12:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:14:15.439 04:12:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:14:15.439 04:12:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:14:15.699 04:12:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:14:15.699 04:12:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:14:15.699 04:12:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:14:15.699 04:12:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:15.699 04:12:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:15.699 [2024-11-21 04:12:15.422648] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:14:15.699 04:12:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:15.699 04:12:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:14:15.699 "name": "raid_bdev1",
00:14:15.699 "aliases": [
00:14:15.699 "7b6c5cc4-cc5f-4a11-aaf6-9302d58c6455"
00:14:15.699 ],
00:14:15.699 "product_name": "Raid Volume",
00:14:15.699 "block_size": 512,
00:14:15.699 "num_blocks": 126976,
00:14:15.699 "uuid": "7b6c5cc4-cc5f-4a11-aaf6-9302d58c6455",
00:14:15.699 "assigned_rate_limits": {
00:14:15.699 "rw_ios_per_sec": 0,
00:14:15.699 "rw_mbytes_per_sec": 0,
00:14:15.699 "r_mbytes_per_sec": 0,
00:14:15.699 "w_mbytes_per_sec": 0
00:14:15.699 },
00:14:15.699 "claimed": false,
00:14:15.699 "zoned": false,
00:14:15.699 "supported_io_types": {
00:14:15.699 "read": true,
00:14:15.699 "write": true,
00:14:15.699 "unmap": false,
00:14:15.699 "flush": false,
00:14:15.699 "reset": true,
00:14:15.699 "nvme_admin": false,
00:14:15.699 "nvme_io": false,
00:14:15.699 "nvme_io_md": false,
00:14:15.699 "write_zeroes": true,
00:14:15.699 "zcopy": false,
00:14:15.699 "get_zone_info": false,
00:14:15.699 "zone_management": false,
00:14:15.699 "zone_append": false,
00:14:15.699 "compare": false,
00:14:15.699 "compare_and_write": false,
00:14:15.699 "abort": false,
00:14:15.699 "seek_hole": false,
00:14:15.699 "seek_data": false,
00:14:15.699 "copy": false,
00:14:15.699 "nvme_iov_md": false
00:14:15.699 },
00:14:15.699 "driver_specific": {
00:14:15.699 "raid": {
00:14:15.699 "uuid": "7b6c5cc4-cc5f-4a11-aaf6-9302d58c6455",
00:14:15.699 "strip_size_kb": 64,
00:14:15.699 "state": "online",
00:14:15.699 "raid_level": "raid5f",
00:14:15.699 "superblock": true,
00:14:15.699 "num_base_bdevs": 3,
00:14:15.699 "num_base_bdevs_discovered": 3,
00:14:15.699 "num_base_bdevs_operational": 3,
00:14:15.699 "base_bdevs_list": [
00:14:15.699 {
00:14:15.699 "name": "pt1",
00:14:15.699 "uuid": "00000000-0000-0000-0000-000000000001",
00:14:15.699 "is_configured": true,
00:14:15.699 "data_offset": 2048,
00:14:15.699 "data_size": 63488
00:14:15.699 },
00:14:15.699 {
00:14:15.699 "name": "pt2",
00:14:15.699 "uuid": "00000000-0000-0000-0000-000000000002",
00:14:15.699 "is_configured": true,
00:14:15.699 "data_offset": 2048,
00:14:15.699 "data_size": 63488
00:14:15.699 },
00:14:15.699 {
00:14:15.699 "name": "pt3",
00:14:15.699 "uuid": "00000000-0000-0000-0000-000000000003",
00:14:15.699 "is_configured": true,
00:14:15.699 "data_offset": 2048,
00:14:15.699 "data_size": 63488
00:14:15.699 }
00:14:15.699 ]
00:14:15.699 }
00:14:15.699 }
00:14:15.699 }'
00:14:15.699 04:12:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:14:15.699 04:12:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:14:15.699 pt2
00:14:15.699 pt3'
00:14:15.699 04:12:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:14:15.699 04:12:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:14:15.699 04:12:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:14:15.699 04:12:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:14:15.699 04:12:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:14:15.699 04:12:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:15.699 04:12:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:15.699 04:12:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:15.699 04:12:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:14:15.699 04:12:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:14:15.699 04:12:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:14:15.699 04:12:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:14:15.699 04:12:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:15.699 04:12:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:15.699 04:12:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:14:15.700 04:12:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:15.700 04:12:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:14:15.700 04:12:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:14:15.700 04:12:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:14:15.700 04:12:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:14:15.700 04:12:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:14:15.700 04:12:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:15.700 04:12:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:15.960 04:12:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:15.960 04:12:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:14:15.960 04:12:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:14:15.960 04:12:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:14:15.960 04:12:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:14:15.960 04:12:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:15.960 04:12:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:15.960 [2024-11-21 04:12:15.702128] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:14:15.960 04:12:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:15.960 04:12:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 7b6c5cc4-cc5f-4a11-aaf6-9302d58c6455 '!=' 7b6c5cc4-cc5f-4a11-aaf6-9302d58c6455 ']'
00:14:15.960 04:12:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f
00:14:15.960 04:12:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:14:15.960 04:12:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0
00:14:15.960 04:12:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1
00:14:15.960 04:12:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:15.960 04:12:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:15.960 [2024-11-21 04:12:15.729975] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
00:14:15.960 04:12:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:15.960 04:12:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2
00:14:15.960 04:12:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:15.960 04:12:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:15.960 04:12:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:14:15.960 04:12:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:15.960 04:12:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:14:15.960 04:12:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:15.960 04:12:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:15.960 04:12:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:15.960 04:12:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:15.960 04:12:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:15.960 04:12:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:15.960 04:12:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:15.960 04:12:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:15.960 04:12:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:15.960 04:12:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:15.960 "name": "raid_bdev1",
00:14:15.960 "uuid": "7b6c5cc4-cc5f-4a11-aaf6-9302d58c6455",
00:14:15.960 "strip_size_kb": 64,
00:14:15.960 "state": "online",
00:14:15.960 "raid_level": "raid5f",
00:14:15.960 "superblock": true,
00:14:15.960 "num_base_bdevs": 3,
00:14:15.960 "num_base_bdevs_discovered": 2,
00:14:15.960 "num_base_bdevs_operational": 2,
00:14:15.960 "base_bdevs_list": [
00:14:15.960 {
00:14:15.960 "name": null,
00:14:15.960 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:15.960 "is_configured": false,
00:14:15.960 "data_offset": 0,
00:14:15.960 "data_size": 63488
00:14:15.960 },
00:14:15.960 {
00:14:15.960 "name": "pt2",
00:14:15.960 "uuid": "00000000-0000-0000-0000-000000000002",
00:14:15.960 "is_configured": true,
00:14:15.960 "data_offset": 2048,
00:14:15.960 "data_size": 63488
00:14:15.960 },
00:14:15.960 {
00:14:15.960 "name": "pt3",
00:14:15.960 "uuid": "00000000-0000-0000-0000-000000000003",
00:14:15.960 "is_configured": true,
00:14:15.960 "data_offset": 2048,
00:14:15.960 "data_size": 63488
00:14:15.960 }
00:14:15.960 ]
00:14:15.960 }'
00:14:15.960 04:12:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:15.960 04:12:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:16.220 04:12:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:14:16.220 04:12:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:16.220 04:12:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:16.220 [2024-11-21 04:12:16.141231] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:14:16.220 [2024-11-21 04:12:16.141299] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:14:16.220 [2024-11-21 04:12:16.141367] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:14:16.220 [2024-11-21 04:12:16.141473] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:14:16.220 [2024-11-21 04:12:16.141526] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline
00:14:16.220 04:12:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:16.220 04:12:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:16.220 04:12:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]'
00:14:16.220 04:12:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:16.220 04:12:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:16.220 04:12:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:16.480 04:12:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev=
00:14:16.480 04:12:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']'
00:14:16.480 04:12:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 ))
00:14:16.480 04:12:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:14:16.480 04:12:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2
00:14:16.480 04:12:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:16.480 04:12:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:16.480 04:12:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:16.481 04:12:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:14:16.481 04:12:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:14:16.481 04:12:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3
00:14:16.481 04:12:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:16.481 04:12:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:16.481 04:12:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:16.481 04:12:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:14:16.481 04:12:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:14:16.481 04:12:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 ))
00:14:16.481 04:12:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:14:16.481 04:12:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:14:16.481 04:12:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:16.481 04:12:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:16.481 [2024-11-21 04:12:16.229074] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:14:16.481 [2024-11-21 04:12:16.229155] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:16.481 [2024-11-21 04:12:16.229188] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980
00:14:16.481 [2024-11-21 04:12:16.229227] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:16.481 [2024-11-21 04:12:16.231739] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:16.481 [2024-11-21 04:12:16.231809] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:14:16.481 [2024-11-21 04:12:16.231897] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:14:16.481 [2024-11-21 04:12:16.231984] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:14:16.481 pt2
00:14:16.481 04:12:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:16.481 04:12:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2
00:14:16.481 04:12:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:16.481 04:12:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:14:16.481 04:12:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:14:16.481 04:12:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:16.481 04:12:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:14:16.481 04:12:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:16.481 04:12:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:16.481 04:12:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:16.481 04:12:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:16.481 04:12:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:16.481 04:12:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:16.481 04:12:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:16.481 04:12:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:16.481 04:12:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:16.481 04:12:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:16.481 "name": "raid_bdev1",
00:14:16.481 "uuid": "7b6c5cc4-cc5f-4a11-aaf6-9302d58c6455",
00:14:16.481 "strip_size_kb": 64,
00:14:16.481 "state": "configuring",
00:14:16.481 "raid_level": "raid5f",
00:14:16.481 "superblock": true,
00:14:16.481 "num_base_bdevs": 3,
00:14:16.481 "num_base_bdevs_discovered": 1,
00:14:16.481 "num_base_bdevs_operational": 2,
00:14:16.481 "base_bdevs_list": [
00:14:16.481 {
00:14:16.481 "name": null,
00:14:16.481 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:16.481 "is_configured": false,
00:14:16.481 "data_offset": 2048,
00:14:16.481 "data_size": 63488
00:14:16.481 },
00:14:16.481 {
00:14:16.481 "name": "pt2",
00:14:16.481 "uuid": "00000000-0000-0000-0000-000000000002",
00:14:16.481 "is_configured": true,
00:14:16.481 "data_offset": 2048,
00:14:16.481 "data_size": 63488
00:14:16.481 },
00:14:16.481 {
00:14:16.481 "name": null,
00:14:16.481 "uuid": "00000000-0000-0000-0000-000000000003",
00:14:16.481 "is_configured": false,
00:14:16.481 "data_offset": 2048,
00:14:16.481 "data_size": 63488
00:14:16.481 }
00:14:16.481 ]
00:14:16.481 }'
00:14:16.481 04:12:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:16.481 04:12:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:16.742 04:12:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ ))
00:14:16.742 04:12:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:14:16.742 04:12:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2
00:14:16.742 04:12:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:14:16.742 04:12:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:16.742 04:12:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:16.742 [2024-11-21 04:12:16.660322] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:14:16.742 [2024-11-21 04:12:16.660411] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:16.742 [2024-11-21 04:12:16.660445] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80
00:14:16.742 [2024-11-21 04:12:16.660470] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:16.742 [2024-11-21 04:12:16.660894] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:16.742 [2024-11-21 04:12:16.660949] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:14:16.742 [2024-11-21 04:12:16.661049] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:14:16.742 [2024-11-21 04:12:16.661096] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:14:16.742 [2024-11-21 04:12:16.661237] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80
00:14:16.742 [2024-11-21 04:12:16.661276] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:14:16.742 [2024-11-21 04:12:16.661563] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600
00:14:16.742 [2024-11-21 04:12:16.662098] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80
00:14:16.742 [2024-11-21 04:12:16.662151] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80
00:14:16.742 [2024-11-21 04:12:16.662459] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:16.742 pt3
00:14:16.742 04:12:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:16.742 04:12:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2
00:14:16.742 04:12:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:16.742 04:12:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:16.742 04:12:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:14:16.742 04:12:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:16.742 04:12:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:14:16.742 04:12:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:16.742 04:12:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:16.742 04:12:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:16.742 04:12:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:16.743 04:12:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:16.743 04:12:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:16.743 04:12:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:16.743 04:12:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:16.743 04:12:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:17.002 04:12:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:17.002 "name": "raid_bdev1",
00:14:17.002 "uuid": "7b6c5cc4-cc5f-4a11-aaf6-9302d58c6455",
00:14:17.002 "strip_size_kb": 64,
00:14:17.003 "state": "online",
00:14:17.003 "raid_level": "raid5f",
00:14:17.003 "superblock": true,
00:14:17.003 "num_base_bdevs": 3,
00:14:17.003 "num_base_bdevs_discovered": 2,
00:14:17.003 "num_base_bdevs_operational": 2,
00:14:17.003 "base_bdevs_list": [
00:14:17.003 {
00:14:17.003 "name": null,
00:14:17.003 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:17.003 "is_configured": false,
00:14:17.003 "data_offset": 2048,
00:14:17.003 "data_size": 63488
00:14:17.003 },
00:14:17.003 {
00:14:17.003 "name": "pt2",
00:14:17.003 "uuid": "00000000-0000-0000-0000-000000000002",
00:14:17.003 "is_configured": true,
00:14:17.003 "data_offset": 2048,
00:14:17.003 "data_size": 63488
00:14:17.003 },
00:14:17.003 {
00:14:17.003 "name": "pt3",
00:14:17.003 "uuid": "00000000-0000-0000-0000-000000000003",
00:14:17.003 "is_configured": true,
00:14:17.003 "data_offset": 2048,
00:14:17.003 "data_size": 63488
00:14:17.003 }
00:14:17.003 ]
00:14:17.003 }'
00:14:17.003 04:12:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:17.003 04:12:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:17.262 04:12:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:14:17.262 04:12:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:17.262 04:12:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:17.262 [2024-11-21 04:12:17.088116] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:14:17.262 [2024-11-21 04:12:17.088187] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:14:17.262 [2024-11-21 04:12:17.088288] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:14:17.262 [2024-11-21 04:12:17.088375] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:14:17.262 [2024-11-21 04:12:17.088428] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline
00:14:17.262 04:12:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:17.262 04:12:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:17.262 04:12:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:17.262 04:12:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:17.262 04:12:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]'
00:14:17.262 04:12:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:17.262 04:12:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev=
00:14:17.262 04:12:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']'
00:14:17.262 04:12:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']'
00:14:17.262 04:12:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2
00:14:17.262 04:12:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3
00:14:17.262 04:12:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:17.262 04:12:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:17.262 04:12:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:17.262 04:12:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:14:17.262 04:12:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.262 04:12:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.262 [2024-11-21 04:12:17.159991] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:17.262 [2024-11-21 04:12:17.160079] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:17.262 [2024-11-21 04:12:17.160106] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:17.262 [2024-11-21 04:12:17.160133] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:17.262 [2024-11-21 04:12:17.162581] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:17.262 [2024-11-21 04:12:17.162668] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:17.262 [2024-11-21 04:12:17.162741] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:17.262 [2024-11-21 04:12:17.162798] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:17.262 [2024-11-21 04:12:17.162938] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:14:17.262 [2024-11-21 04:12:17.162995] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:17.262 [2024-11-21 04:12:17.163068] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state configuring 00:14:17.262 [2024-11-21 04:12:17.163163] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:17.262 pt1 00:14:17.262 04:12:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.263 04:12:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:14:17.263 04:12:17 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:14:17.263 04:12:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:17.263 04:12:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:17.263 04:12:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:17.263 04:12:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:17.263 04:12:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:17.263 04:12:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:17.263 04:12:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:17.263 04:12:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:17.263 04:12:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:17.263 04:12:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.263 04:12:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:17.263 04:12:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.263 04:12:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.263 04:12:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.263 04:12:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:17.263 "name": "raid_bdev1", 00:14:17.263 "uuid": "7b6c5cc4-cc5f-4a11-aaf6-9302d58c6455", 00:14:17.263 "strip_size_kb": 64, 00:14:17.263 "state": "configuring", 00:14:17.263 "raid_level": "raid5f", 00:14:17.263 
"superblock": true, 00:14:17.263 "num_base_bdevs": 3, 00:14:17.263 "num_base_bdevs_discovered": 1, 00:14:17.263 "num_base_bdevs_operational": 2, 00:14:17.263 "base_bdevs_list": [ 00:14:17.263 { 00:14:17.263 "name": null, 00:14:17.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.263 "is_configured": false, 00:14:17.263 "data_offset": 2048, 00:14:17.263 "data_size": 63488 00:14:17.263 }, 00:14:17.263 { 00:14:17.263 "name": "pt2", 00:14:17.263 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:17.263 "is_configured": true, 00:14:17.263 "data_offset": 2048, 00:14:17.263 "data_size": 63488 00:14:17.263 }, 00:14:17.263 { 00:14:17.263 "name": null, 00:14:17.263 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:17.263 "is_configured": false, 00:14:17.263 "data_offset": 2048, 00:14:17.263 "data_size": 63488 00:14:17.263 } 00:14:17.263 ] 00:14:17.263 }' 00:14:17.263 04:12:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:17.263 04:12:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.832 04:12:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:14:17.832 04:12:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.832 04:12:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.832 04:12:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:17.832 04:12:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.832 04:12:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:14:17.832 04:12:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:17.832 04:12:17 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.832 04:12:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.832 [2024-11-21 04:12:17.679108] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:17.832 [2024-11-21 04:12:17.679205] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:17.832 [2024-11-21 04:12:17.679247] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:17.832 [2024-11-21 04:12:17.679278] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:17.832 [2024-11-21 04:12:17.679742] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:17.832 [2024-11-21 04:12:17.679811] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:17.832 [2024-11-21 04:12:17.679920] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:17.832 [2024-11-21 04:12:17.679985] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:17.832 [2024-11-21 04:12:17.680111] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:14:17.832 [2024-11-21 04:12:17.680180] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:17.832 [2024-11-21 04:12:17.680486] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:14:17.832 [2024-11-21 04:12:17.681005] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:14:17.832 [2024-11-21 04:12:17.681056] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:14:17.832 [2024-11-21 04:12:17.681313] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:17.832 pt3 00:14:17.832 04:12:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:14:17.832 04:12:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:17.832 04:12:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:17.832 04:12:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:17.832 04:12:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:17.832 04:12:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:17.832 04:12:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:17.832 04:12:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:17.832 04:12:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:17.832 04:12:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:17.832 04:12:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:17.832 04:12:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.832 04:12:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:17.832 04:12:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.832 04:12:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.832 04:12:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.832 04:12:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:17.832 "name": "raid_bdev1", 00:14:17.832 "uuid": "7b6c5cc4-cc5f-4a11-aaf6-9302d58c6455", 00:14:17.832 "strip_size_kb": 64, 00:14:17.832 "state": "online", 00:14:17.832 "raid_level": 
"raid5f", 00:14:17.832 "superblock": true, 00:14:17.832 "num_base_bdevs": 3, 00:14:17.832 "num_base_bdevs_discovered": 2, 00:14:17.832 "num_base_bdevs_operational": 2, 00:14:17.832 "base_bdevs_list": [ 00:14:17.832 { 00:14:17.832 "name": null, 00:14:17.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.832 "is_configured": false, 00:14:17.832 "data_offset": 2048, 00:14:17.832 "data_size": 63488 00:14:17.832 }, 00:14:17.832 { 00:14:17.832 "name": "pt2", 00:14:17.832 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:17.832 "is_configured": true, 00:14:17.832 "data_offset": 2048, 00:14:17.832 "data_size": 63488 00:14:17.832 }, 00:14:17.832 { 00:14:17.832 "name": "pt3", 00:14:17.832 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:17.832 "is_configured": true, 00:14:17.832 "data_offset": 2048, 00:14:17.832 "data_size": 63488 00:14:17.832 } 00:14:17.832 ] 00:14:17.832 }' 00:14:17.832 04:12:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:17.832 04:12:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.402 04:12:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:14:18.402 04:12:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.402 04:12:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:18.402 04:12:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.402 04:12:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.402 04:12:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:14:18.402 04:12:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:18.402 04:12:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:18.402 04:12:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.402 04:12:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:14:18.402 [2024-11-21 04:12:18.210554] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:18.402 04:12:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.402 04:12:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 7b6c5cc4-cc5f-4a11-aaf6-9302d58c6455 '!=' 7b6c5cc4-cc5f-4a11-aaf6-9302d58c6455 ']' 00:14:18.402 04:12:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 91707 00:14:18.402 04:12:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 91707 ']' 00:14:18.402 04:12:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 91707 00:14:18.402 04:12:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:14:18.402 04:12:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:18.402 04:12:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91707 00:14:18.402 04:12:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:18.402 04:12:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:18.402 04:12:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91707' 00:14:18.402 killing process with pid 91707 00:14:18.402 04:12:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 91707 00:14:18.402 [2024-11-21 04:12:18.298131] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:18.402 [2024-11-21 04:12:18.298297] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:14:18.402 04:12:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 91707 00:14:18.402 [2024-11-21 04:12:18.298401] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:18.402 [2024-11-21 04:12:18.298415] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:14:18.402 [2024-11-21 04:12:18.358696] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:18.972 04:12:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:14:18.972 ************************************ 00:14:18.972 END TEST raid5f_superblock_test 00:14:18.972 00:14:18.972 real 0m6.686s 00:14:18.972 user 0m11.012s 00:14:18.972 sys 0m1.509s 00:14:18.972 04:12:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:18.972 04:12:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.972 ************************************ 00:14:18.972 04:12:18 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:14:18.972 04:12:18 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:14:18.972 04:12:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:18.973 04:12:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:18.973 04:12:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:18.973 ************************************ 00:14:18.973 START TEST raid5f_rebuild_test 00:14:18.973 ************************************ 00:14:18.973 04:12:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true 00:14:18.973 04:12:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:14:18.973 04:12:18 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:14:18.973 04:12:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:18.973 04:12:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:18.973 04:12:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:18.973 04:12:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:18.973 04:12:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:18.973 04:12:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:18.973 04:12:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:18.973 04:12:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:18.973 04:12:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:18.973 04:12:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:18.973 04:12:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:18.973 04:12:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:18.973 04:12:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:18.973 04:12:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:18.973 04:12:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:18.973 04:12:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:18.973 04:12:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:18.973 04:12:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:18.973 04:12:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 
00:14:18.973 04:12:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:18.973 04:12:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:18.973 04:12:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:14:18.973 04:12:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:14:18.973 04:12:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:14:18.973 04:12:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:14:18.973 04:12:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:18.973 04:12:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=92140 00:14:18.973 04:12:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:18.973 04:12:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 92140 00:14:18.973 04:12:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 92140 ']' 00:14:18.973 04:12:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:18.973 04:12:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:18.973 04:12:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:18.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:18.973 04:12:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:18.973 04:12:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.973 [2024-11-21 04:12:18.874837] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:14:18.973 [2024-11-21 04:12:18.875005] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:14:18.973 Zero copy mechanism will not be used. 00:14:18.973 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92140 ] 00:14:19.233 [2024-11-21 04:12:19.031620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:19.233 [2024-11-21 04:12:19.072543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:19.233 [2024-11-21 04:12:19.147925] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:19.233 [2024-11-21 04:12:19.148043] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:19.803 04:12:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:19.803 04:12:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:14:19.803 04:12:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:19.803 04:12:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:19.803 04:12:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.803 04:12:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.803 BaseBdev1_malloc 00:14:19.803 04:12:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.803 
04:12:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:19.803 04:12:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.803 04:12:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.803 [2024-11-21 04:12:19.713943] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:19.803 [2024-11-21 04:12:19.714021] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:19.803 [2024-11-21 04:12:19.714054] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:14:19.803 [2024-11-21 04:12:19.714074] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:19.803 [2024-11-21 04:12:19.716519] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:19.803 [2024-11-21 04:12:19.716553] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:19.803 BaseBdev1 00:14:19.803 04:12:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.803 04:12:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:19.803 04:12:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:19.803 04:12:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.803 04:12:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.803 BaseBdev2_malloc 00:14:19.803 04:12:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.803 04:12:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:19.803 04:12:19 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.803 04:12:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.803 [2024-11-21 04:12:19.748274] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:19.803 [2024-11-21 04:12:19.748386] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:19.803 [2024-11-21 04:12:19.748425] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:19.803 [2024-11-21 04:12:19.748452] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:19.803 [2024-11-21 04:12:19.750805] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:19.803 [2024-11-21 04:12:19.750878] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:19.803 BaseBdev2 00:14:19.803 04:12:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.803 04:12:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:19.803 04:12:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:19.803 04:12:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.803 04:12:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.063 BaseBdev3_malloc 00:14:20.063 04:12:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.064 04:12:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:20.064 04:12:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.064 04:12:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.064 [2024-11-21 04:12:19.782510] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:20.064 [2024-11-21 04:12:19.782563] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:20.064 [2024-11-21 04:12:19.782588] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:20.064 [2024-11-21 04:12:19.782597] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:20.064 [2024-11-21 04:12:19.785051] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:20.064 [2024-11-21 04:12:19.785085] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:20.064 BaseBdev3 00:14:20.064 04:12:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.064 04:12:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:20.064 04:12:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.064 04:12:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.064 spare_malloc 00:14:20.064 04:12:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.064 04:12:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:20.064 04:12:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.064 04:12:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.064 spare_delay 00:14:20.064 04:12:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.064 04:12:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:20.064 04:12:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:20.064 04:12:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.064 [2024-11-21 04:12:19.843928] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:20.064 [2024-11-21 04:12:19.843990] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:20.064 [2024-11-21 04:12:19.844025] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:20.064 [2024-11-21 04:12:19.844035] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:20.064 [2024-11-21 04:12:19.846842] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:20.064 [2024-11-21 04:12:19.846954] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:20.064 spare 00:14:20.064 04:12:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.064 04:12:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:14:20.064 04:12:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.064 04:12:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.064 [2024-11-21 04:12:19.855969] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:20.064 [2024-11-21 04:12:19.858121] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:20.064 [2024-11-21 04:12:19.858230] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:20.064 [2024-11-21 04:12:19.858351] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:14:20.064 [2024-11-21 04:12:19.858406] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:20.064 [2024-11-21 
04:12:19.858696] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:14:20.064 [2024-11-21 04:12:19.859179] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:14:20.064 [2024-11-21 04:12:19.859237] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:14:20.064 [2024-11-21 04:12:19.859415] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:20.064 04:12:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.064 04:12:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:20.064 04:12:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:20.064 04:12:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:20.064 04:12:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:20.064 04:12:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:20.064 04:12:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:20.064 04:12:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:20.064 04:12:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:20.064 04:12:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:20.064 04:12:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:20.064 04:12:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.064 04:12:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.064 04:12:19 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.064 04:12:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.064 04:12:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.064 04:12:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:20.064 "name": "raid_bdev1", 00:14:20.064 "uuid": "b078e92e-c567-4df7-b401-faa4a978efb8", 00:14:20.064 "strip_size_kb": 64, 00:14:20.064 "state": "online", 00:14:20.064 "raid_level": "raid5f", 00:14:20.064 "superblock": false, 00:14:20.064 "num_base_bdevs": 3, 00:14:20.064 "num_base_bdevs_discovered": 3, 00:14:20.064 "num_base_bdevs_operational": 3, 00:14:20.064 "base_bdevs_list": [ 00:14:20.064 { 00:14:20.064 "name": "BaseBdev1", 00:14:20.064 "uuid": "58947814-bcba-5225-8d87-b1f04afaf3bb", 00:14:20.064 "is_configured": true, 00:14:20.064 "data_offset": 0, 00:14:20.064 "data_size": 65536 00:14:20.064 }, 00:14:20.064 { 00:14:20.064 "name": "BaseBdev2", 00:14:20.064 "uuid": "925c6f2f-0861-58a9-9cfc-21eb254256d2", 00:14:20.064 "is_configured": true, 00:14:20.064 "data_offset": 0, 00:14:20.064 "data_size": 65536 00:14:20.064 }, 00:14:20.064 { 00:14:20.064 "name": "BaseBdev3", 00:14:20.064 "uuid": "61316f01-ef2d-53c7-b90c-69027628da22", 00:14:20.064 "is_configured": true, 00:14:20.064 "data_offset": 0, 00:14:20.064 "data_size": 65536 00:14:20.064 } 00:14:20.064 ] 00:14:20.064 }' 00:14:20.064 04:12:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:20.064 04:12:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.634 04:12:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:20.634 04:12:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:20.634 04:12:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.634 04:12:20 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.634 [2024-11-21 04:12:20.337070] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:20.634 04:12:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.634 04:12:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:14:20.634 04:12:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.634 04:12:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:20.634 04:12:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.634 04:12:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.634 04:12:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.634 04:12:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:20.634 04:12:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:20.634 04:12:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:20.634 04:12:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:20.634 04:12:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:20.634 04:12:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:20.634 04:12:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:20.634 04:12:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:20.634 04:12:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:20.634 04:12:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # 
local nbd_list 00:14:20.634 04:12:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:20.634 04:12:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:20.634 04:12:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:20.634 04:12:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:20.634 [2024-11-21 04:12:20.600460] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:14:20.897 /dev/nbd0 00:14:20.897 04:12:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:20.897 04:12:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:20.897 04:12:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:20.897 04:12:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:20.897 04:12:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:20.897 04:12:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:20.897 04:12:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:20.897 04:12:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:20.897 04:12:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:20.897 04:12:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:20.897 04:12:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:20.897 1+0 records in 00:14:20.897 1+0 records out 00:14:20.897 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000297223 s, 13.8 MB/s 00:14:20.897 
04:12:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:20.897 04:12:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:20.897 04:12:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:20.897 04:12:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:20.897 04:12:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:20.897 04:12:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:20.897 04:12:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:20.897 04:12:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:14:20.897 04:12:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:14:20.897 04:12:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:14:20.897 04:12:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:14:21.157 512+0 records in 00:14:21.157 512+0 records out 00:14:21.157 67108864 bytes (67 MB, 64 MiB) copied, 0.315883 s, 212 MB/s 00:14:21.157 04:12:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:21.157 04:12:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:21.157 04:12:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:21.157 04:12:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:21.157 04:12:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:21.157 04:12:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 
00:14:21.157 04:12:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:21.417 04:12:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:21.417 [2024-11-21 04:12:21.196934] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:21.417 04:12:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:21.417 04:12:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:21.417 04:12:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:21.417 04:12:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:21.417 04:12:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:21.417 04:12:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:21.417 04:12:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:21.417 04:12:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:21.417 04:12:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.417 04:12:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.417 [2024-11-21 04:12:21.213004] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:21.417 04:12:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.417 04:12:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:21.417 04:12:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:21.417 04:12:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:21.417 04:12:21 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:21.417 04:12:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:21.417 04:12:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:21.417 04:12:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:21.417 04:12:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:21.417 04:12:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:21.417 04:12:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:21.417 04:12:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.417 04:12:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:21.417 04:12:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.417 04:12:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.417 04:12:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.417 04:12:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:21.417 "name": "raid_bdev1", 00:14:21.417 "uuid": "b078e92e-c567-4df7-b401-faa4a978efb8", 00:14:21.417 "strip_size_kb": 64, 00:14:21.417 "state": "online", 00:14:21.417 "raid_level": "raid5f", 00:14:21.417 "superblock": false, 00:14:21.417 "num_base_bdevs": 3, 00:14:21.417 "num_base_bdevs_discovered": 2, 00:14:21.417 "num_base_bdevs_operational": 2, 00:14:21.417 "base_bdevs_list": [ 00:14:21.417 { 00:14:21.417 "name": null, 00:14:21.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.417 "is_configured": false, 00:14:21.417 "data_offset": 0, 00:14:21.417 "data_size": 65536 00:14:21.417 }, 00:14:21.417 { 00:14:21.417 
"name": "BaseBdev2", 00:14:21.417 "uuid": "925c6f2f-0861-58a9-9cfc-21eb254256d2", 00:14:21.417 "is_configured": true, 00:14:21.417 "data_offset": 0, 00:14:21.417 "data_size": 65536 00:14:21.417 }, 00:14:21.417 { 00:14:21.417 "name": "BaseBdev3", 00:14:21.417 "uuid": "61316f01-ef2d-53c7-b90c-69027628da22", 00:14:21.417 "is_configured": true, 00:14:21.417 "data_offset": 0, 00:14:21.417 "data_size": 65536 00:14:21.417 } 00:14:21.417 ] 00:14:21.417 }' 00:14:21.417 04:12:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:21.417 04:12:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.987 04:12:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:21.987 04:12:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.987 04:12:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.987 [2024-11-21 04:12:21.684282] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:21.987 [2024-11-21 04:12:21.692203] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027cd0 00:14:21.987 04:12:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.987 04:12:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:21.987 [2024-11-21 04:12:21.694713] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:22.928 04:12:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:22.928 04:12:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:22.928 04:12:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:22.928 04:12:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:14:22.928 04:12:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:22.928 04:12:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.928 04:12:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.928 04:12:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.928 04:12:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.928 04:12:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.928 04:12:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:22.928 "name": "raid_bdev1", 00:14:22.928 "uuid": "b078e92e-c567-4df7-b401-faa4a978efb8", 00:14:22.928 "strip_size_kb": 64, 00:14:22.928 "state": "online", 00:14:22.928 "raid_level": "raid5f", 00:14:22.928 "superblock": false, 00:14:22.928 "num_base_bdevs": 3, 00:14:22.928 "num_base_bdevs_discovered": 3, 00:14:22.928 "num_base_bdevs_operational": 3, 00:14:22.928 "process": { 00:14:22.928 "type": "rebuild", 00:14:22.928 "target": "spare", 00:14:22.928 "progress": { 00:14:22.928 "blocks": 20480, 00:14:22.928 "percent": 15 00:14:22.928 } 00:14:22.928 }, 00:14:22.928 "base_bdevs_list": [ 00:14:22.928 { 00:14:22.928 "name": "spare", 00:14:22.928 "uuid": "249ea6f8-7d33-5408-bc3a-9fcc9653f8a6", 00:14:22.928 "is_configured": true, 00:14:22.928 "data_offset": 0, 00:14:22.928 "data_size": 65536 00:14:22.928 }, 00:14:22.928 { 00:14:22.928 "name": "BaseBdev2", 00:14:22.928 "uuid": "925c6f2f-0861-58a9-9cfc-21eb254256d2", 00:14:22.928 "is_configured": true, 00:14:22.928 "data_offset": 0, 00:14:22.928 "data_size": 65536 00:14:22.928 }, 00:14:22.928 { 00:14:22.928 "name": "BaseBdev3", 00:14:22.928 "uuid": "61316f01-ef2d-53c7-b90c-69027628da22", 00:14:22.928 "is_configured": true, 00:14:22.928 "data_offset": 0, 00:14:22.928 
"data_size": 65536 00:14:22.928 } 00:14:22.928 ] 00:14:22.928 }' 00:14:22.928 04:12:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:22.928 04:12:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:22.928 04:12:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:22.928 04:12:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:22.928 04:12:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:22.928 04:12:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.928 04:12:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.928 [2024-11-21 04:12:22.854435] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:23.188 [2024-11-21 04:12:22.903203] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:23.188 [2024-11-21 04:12:22.903304] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:23.188 [2024-11-21 04:12:22.903324] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:23.188 [2024-11-21 04:12:22.903335] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:23.188 04:12:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.188 04:12:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:23.188 04:12:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:23.188 04:12:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:23.188 04:12:22 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:23.188 04:12:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:23.188 04:12:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:23.188 04:12:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:23.188 04:12:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:23.188 04:12:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:23.188 04:12:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:23.188 04:12:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.188 04:12:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:23.188 04:12:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.188 04:12:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.188 04:12:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.188 04:12:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:23.188 "name": "raid_bdev1", 00:14:23.188 "uuid": "b078e92e-c567-4df7-b401-faa4a978efb8", 00:14:23.188 "strip_size_kb": 64, 00:14:23.188 "state": "online", 00:14:23.188 "raid_level": "raid5f", 00:14:23.188 "superblock": false, 00:14:23.188 "num_base_bdevs": 3, 00:14:23.188 "num_base_bdevs_discovered": 2, 00:14:23.188 "num_base_bdevs_operational": 2, 00:14:23.188 "base_bdevs_list": [ 00:14:23.188 { 00:14:23.188 "name": null, 00:14:23.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.188 "is_configured": false, 00:14:23.188 "data_offset": 0, 00:14:23.188 "data_size": 65536 00:14:23.188 }, 00:14:23.188 { 00:14:23.188 "name": "BaseBdev2", 00:14:23.188 
"uuid": "925c6f2f-0861-58a9-9cfc-21eb254256d2", 00:14:23.188 "is_configured": true, 00:14:23.188 "data_offset": 0, 00:14:23.188 "data_size": 65536 00:14:23.188 }, 00:14:23.188 { 00:14:23.188 "name": "BaseBdev3", 00:14:23.188 "uuid": "61316f01-ef2d-53c7-b90c-69027628da22", 00:14:23.188 "is_configured": true, 00:14:23.188 "data_offset": 0, 00:14:23.188 "data_size": 65536 00:14:23.188 } 00:14:23.188 ] 00:14:23.188 }' 00:14:23.188 04:12:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:23.188 04:12:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.448 04:12:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:23.448 04:12:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:23.448 04:12:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:23.448 04:12:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:23.448 04:12:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:23.448 04:12:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:23.448 04:12:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.448 04:12:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.448 04:12:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.448 04:12:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.448 04:12:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:23.448 "name": "raid_bdev1", 00:14:23.448 "uuid": "b078e92e-c567-4df7-b401-faa4a978efb8", 00:14:23.448 "strip_size_kb": 64, 00:14:23.448 "state": "online", 00:14:23.448 "raid_level": 
"raid5f", 00:14:23.448 "superblock": false, 00:14:23.448 "num_base_bdevs": 3, 00:14:23.448 "num_base_bdevs_discovered": 2, 00:14:23.448 "num_base_bdevs_operational": 2, 00:14:23.448 "base_bdevs_list": [ 00:14:23.448 { 00:14:23.448 "name": null, 00:14:23.448 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.448 "is_configured": false, 00:14:23.448 "data_offset": 0, 00:14:23.448 "data_size": 65536 00:14:23.448 }, 00:14:23.448 { 00:14:23.448 "name": "BaseBdev2", 00:14:23.448 "uuid": "925c6f2f-0861-58a9-9cfc-21eb254256d2", 00:14:23.448 "is_configured": true, 00:14:23.448 "data_offset": 0, 00:14:23.448 "data_size": 65536 00:14:23.448 }, 00:14:23.448 { 00:14:23.448 "name": "BaseBdev3", 00:14:23.448 "uuid": "61316f01-ef2d-53c7-b90c-69027628da22", 00:14:23.448 "is_configured": true, 00:14:23.448 "data_offset": 0, 00:14:23.448 "data_size": 65536 00:14:23.448 } 00:14:23.448 ] 00:14:23.448 }' 00:14:23.448 04:12:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:23.708 04:12:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:23.708 04:12:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:23.708 04:12:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:23.708 04:12:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:23.708 04:12:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.708 04:12:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.708 [2024-11-21 04:12:23.500559] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:23.708 [2024-11-21 04:12:23.507486] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027da0 00:14:23.708 04:12:23 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.708 04:12:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:23.708 [2024-11-21 04:12:23.509962] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:24.648 04:12:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:24.648 04:12:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:24.648 04:12:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:24.648 04:12:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:24.648 04:12:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:24.648 04:12:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.648 04:12:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.648 04:12:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.648 04:12:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.648 04:12:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.648 04:12:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:24.648 "name": "raid_bdev1", 00:14:24.648 "uuid": "b078e92e-c567-4df7-b401-faa4a978efb8", 00:14:24.648 "strip_size_kb": 64, 00:14:24.648 "state": "online", 00:14:24.648 "raid_level": "raid5f", 00:14:24.648 "superblock": false, 00:14:24.648 "num_base_bdevs": 3, 00:14:24.648 "num_base_bdevs_discovered": 3, 00:14:24.648 "num_base_bdevs_operational": 3, 00:14:24.648 "process": { 00:14:24.648 "type": "rebuild", 00:14:24.648 "target": "spare", 00:14:24.648 "progress": { 00:14:24.648 "blocks": 20480, 00:14:24.648 
"percent": 15 00:14:24.648 } 00:14:24.648 }, 00:14:24.648 "base_bdevs_list": [ 00:14:24.648 { 00:14:24.648 "name": "spare", 00:14:24.648 "uuid": "249ea6f8-7d33-5408-bc3a-9fcc9653f8a6", 00:14:24.648 "is_configured": true, 00:14:24.648 "data_offset": 0, 00:14:24.648 "data_size": 65536 00:14:24.648 }, 00:14:24.648 { 00:14:24.648 "name": "BaseBdev2", 00:14:24.648 "uuid": "925c6f2f-0861-58a9-9cfc-21eb254256d2", 00:14:24.648 "is_configured": true, 00:14:24.648 "data_offset": 0, 00:14:24.648 "data_size": 65536 00:14:24.648 }, 00:14:24.648 { 00:14:24.648 "name": "BaseBdev3", 00:14:24.648 "uuid": "61316f01-ef2d-53c7-b90c-69027628da22", 00:14:24.648 "is_configured": true, 00:14:24.648 "data_offset": 0, 00:14:24.648 "data_size": 65536 00:14:24.648 } 00:14:24.648 ] 00:14:24.648 }' 00:14:24.648 04:12:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:24.648 04:12:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:24.648 04:12:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:24.909 04:12:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:24.909 04:12:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:24.909 04:12:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:14:24.909 04:12:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:14:24.909 04:12:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=460 00:14:24.909 04:12:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:24.909 04:12:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:24.909 04:12:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:14:24.909 04:12:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:24.909 04:12:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:24.909 04:12:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:24.909 04:12:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.909 04:12:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.909 04:12:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.909 04:12:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.909 04:12:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.909 04:12:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:24.909 "name": "raid_bdev1", 00:14:24.909 "uuid": "b078e92e-c567-4df7-b401-faa4a978efb8", 00:14:24.909 "strip_size_kb": 64, 00:14:24.909 "state": "online", 00:14:24.909 "raid_level": "raid5f", 00:14:24.909 "superblock": false, 00:14:24.909 "num_base_bdevs": 3, 00:14:24.909 "num_base_bdevs_discovered": 3, 00:14:24.909 "num_base_bdevs_operational": 3, 00:14:24.909 "process": { 00:14:24.909 "type": "rebuild", 00:14:24.909 "target": "spare", 00:14:24.909 "progress": { 00:14:24.909 "blocks": 22528, 00:14:24.909 "percent": 17 00:14:24.909 } 00:14:24.909 }, 00:14:24.909 "base_bdevs_list": [ 00:14:24.909 { 00:14:24.909 "name": "spare", 00:14:24.909 "uuid": "249ea6f8-7d33-5408-bc3a-9fcc9653f8a6", 00:14:24.909 "is_configured": true, 00:14:24.909 "data_offset": 0, 00:14:24.909 "data_size": 65536 00:14:24.909 }, 00:14:24.909 { 00:14:24.909 "name": "BaseBdev2", 00:14:24.909 "uuid": "925c6f2f-0861-58a9-9cfc-21eb254256d2", 00:14:24.909 "is_configured": true, 00:14:24.909 "data_offset": 0, 00:14:24.909 
"data_size": 65536 00:14:24.909 }, 00:14:24.909 { 00:14:24.909 "name": "BaseBdev3", 00:14:24.909 "uuid": "61316f01-ef2d-53c7-b90c-69027628da22", 00:14:24.909 "is_configured": true, 00:14:24.909 "data_offset": 0, 00:14:24.909 "data_size": 65536 00:14:24.909 } 00:14:24.909 ] 00:14:24.909 }' 00:14:24.909 04:12:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:24.909 04:12:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:24.909 04:12:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:24.909 04:12:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:24.909 04:12:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:25.848 04:12:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:25.848 04:12:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:25.848 04:12:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:25.848 04:12:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:25.848 04:12:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:25.848 04:12:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:26.108 04:12:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.108 04:12:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.108 04:12:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.108 04:12:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.108 04:12:25 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.108 04:12:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:26.108 "name": "raid_bdev1", 00:14:26.108 "uuid": "b078e92e-c567-4df7-b401-faa4a978efb8", 00:14:26.108 "strip_size_kb": 64, 00:14:26.108 "state": "online", 00:14:26.108 "raid_level": "raid5f", 00:14:26.108 "superblock": false, 00:14:26.108 "num_base_bdevs": 3, 00:14:26.108 "num_base_bdevs_discovered": 3, 00:14:26.108 "num_base_bdevs_operational": 3, 00:14:26.108 "process": { 00:14:26.108 "type": "rebuild", 00:14:26.108 "target": "spare", 00:14:26.108 "progress": { 00:14:26.108 "blocks": 47104, 00:14:26.108 "percent": 35 00:14:26.108 } 00:14:26.108 }, 00:14:26.108 "base_bdevs_list": [ 00:14:26.108 { 00:14:26.108 "name": "spare", 00:14:26.108 "uuid": "249ea6f8-7d33-5408-bc3a-9fcc9653f8a6", 00:14:26.108 "is_configured": true, 00:14:26.108 "data_offset": 0, 00:14:26.108 "data_size": 65536 00:14:26.108 }, 00:14:26.108 { 00:14:26.108 "name": "BaseBdev2", 00:14:26.108 "uuid": "925c6f2f-0861-58a9-9cfc-21eb254256d2", 00:14:26.108 "is_configured": true, 00:14:26.108 "data_offset": 0, 00:14:26.108 "data_size": 65536 00:14:26.108 }, 00:14:26.108 { 00:14:26.108 "name": "BaseBdev3", 00:14:26.108 "uuid": "61316f01-ef2d-53c7-b90c-69027628da22", 00:14:26.108 "is_configured": true, 00:14:26.108 "data_offset": 0, 00:14:26.108 "data_size": 65536 00:14:26.108 } 00:14:26.108 ] 00:14:26.108 }' 00:14:26.108 04:12:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:26.108 04:12:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:26.108 04:12:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:26.108 04:12:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:26.108 04:12:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 
00:14:27.048 04:12:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:27.048 04:12:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:27.048 04:12:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:27.048 04:12:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:27.048 04:12:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:27.048 04:12:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:27.048 04:12:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.048 04:12:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.048 04:12:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.048 04:12:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.048 04:12:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.308 04:12:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:27.308 "name": "raid_bdev1", 00:14:27.308 "uuid": "b078e92e-c567-4df7-b401-faa4a978efb8", 00:14:27.308 "strip_size_kb": 64, 00:14:27.308 "state": "online", 00:14:27.308 "raid_level": "raid5f", 00:14:27.308 "superblock": false, 00:14:27.308 "num_base_bdevs": 3, 00:14:27.308 "num_base_bdevs_discovered": 3, 00:14:27.308 "num_base_bdevs_operational": 3, 00:14:27.308 "process": { 00:14:27.308 "type": "rebuild", 00:14:27.308 "target": "spare", 00:14:27.308 "progress": { 00:14:27.308 "blocks": 69632, 00:14:27.308 "percent": 53 00:14:27.308 } 00:14:27.308 }, 00:14:27.308 "base_bdevs_list": [ 00:14:27.308 { 00:14:27.308 "name": "spare", 00:14:27.308 "uuid": 
"249ea6f8-7d33-5408-bc3a-9fcc9653f8a6", 00:14:27.308 "is_configured": true, 00:14:27.308 "data_offset": 0, 00:14:27.308 "data_size": 65536 00:14:27.308 }, 00:14:27.308 { 00:14:27.308 "name": "BaseBdev2", 00:14:27.308 "uuid": "925c6f2f-0861-58a9-9cfc-21eb254256d2", 00:14:27.308 "is_configured": true, 00:14:27.308 "data_offset": 0, 00:14:27.308 "data_size": 65536 00:14:27.308 }, 00:14:27.308 { 00:14:27.308 "name": "BaseBdev3", 00:14:27.308 "uuid": "61316f01-ef2d-53c7-b90c-69027628da22", 00:14:27.308 "is_configured": true, 00:14:27.308 "data_offset": 0, 00:14:27.308 "data_size": 65536 00:14:27.308 } 00:14:27.308 ] 00:14:27.308 }' 00:14:27.308 04:12:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:27.308 04:12:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:27.308 04:12:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:27.308 04:12:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:27.308 04:12:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:28.247 04:12:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:28.247 04:12:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:28.247 04:12:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:28.247 04:12:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:28.247 04:12:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:28.247 04:12:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:28.247 04:12:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.247 04:12:28 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.247 04:12:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.247 04:12:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.247 04:12:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.247 04:12:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:28.247 "name": "raid_bdev1", 00:14:28.247 "uuid": "b078e92e-c567-4df7-b401-faa4a978efb8", 00:14:28.247 "strip_size_kb": 64, 00:14:28.247 "state": "online", 00:14:28.247 "raid_level": "raid5f", 00:14:28.247 "superblock": false, 00:14:28.247 "num_base_bdevs": 3, 00:14:28.247 "num_base_bdevs_discovered": 3, 00:14:28.247 "num_base_bdevs_operational": 3, 00:14:28.247 "process": { 00:14:28.247 "type": "rebuild", 00:14:28.247 "target": "spare", 00:14:28.247 "progress": { 00:14:28.247 "blocks": 94208, 00:14:28.247 "percent": 71 00:14:28.247 } 00:14:28.247 }, 00:14:28.247 "base_bdevs_list": [ 00:14:28.247 { 00:14:28.247 "name": "spare", 00:14:28.247 "uuid": "249ea6f8-7d33-5408-bc3a-9fcc9653f8a6", 00:14:28.247 "is_configured": true, 00:14:28.247 "data_offset": 0, 00:14:28.247 "data_size": 65536 00:14:28.247 }, 00:14:28.247 { 00:14:28.247 "name": "BaseBdev2", 00:14:28.247 "uuid": "925c6f2f-0861-58a9-9cfc-21eb254256d2", 00:14:28.247 "is_configured": true, 00:14:28.247 "data_offset": 0, 00:14:28.247 "data_size": 65536 00:14:28.247 }, 00:14:28.247 { 00:14:28.247 "name": "BaseBdev3", 00:14:28.247 "uuid": "61316f01-ef2d-53c7-b90c-69027628da22", 00:14:28.247 "is_configured": true, 00:14:28.247 "data_offset": 0, 00:14:28.247 "data_size": 65536 00:14:28.247 } 00:14:28.247 ] 00:14:28.247 }' 00:14:28.247 04:12:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:28.507 04:12:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:28.507 04:12:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:28.508 04:12:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:28.508 04:12:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:29.446 04:12:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:29.446 04:12:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:29.446 04:12:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:29.446 04:12:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:29.446 04:12:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:29.446 04:12:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:29.446 04:12:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.446 04:12:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.446 04:12:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.446 04:12:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.446 04:12:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.446 04:12:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:29.446 "name": "raid_bdev1", 00:14:29.446 "uuid": "b078e92e-c567-4df7-b401-faa4a978efb8", 00:14:29.446 "strip_size_kb": 64, 00:14:29.446 "state": "online", 00:14:29.446 "raid_level": "raid5f", 00:14:29.446 "superblock": false, 00:14:29.446 "num_base_bdevs": 3, 00:14:29.446 "num_base_bdevs_discovered": 3, 00:14:29.446 
"num_base_bdevs_operational": 3, 00:14:29.446 "process": { 00:14:29.446 "type": "rebuild", 00:14:29.446 "target": "spare", 00:14:29.446 "progress": { 00:14:29.446 "blocks": 116736, 00:14:29.446 "percent": 89 00:14:29.446 } 00:14:29.446 }, 00:14:29.446 "base_bdevs_list": [ 00:14:29.446 { 00:14:29.446 "name": "spare", 00:14:29.446 "uuid": "249ea6f8-7d33-5408-bc3a-9fcc9653f8a6", 00:14:29.446 "is_configured": true, 00:14:29.446 "data_offset": 0, 00:14:29.446 "data_size": 65536 00:14:29.446 }, 00:14:29.446 { 00:14:29.446 "name": "BaseBdev2", 00:14:29.446 "uuid": "925c6f2f-0861-58a9-9cfc-21eb254256d2", 00:14:29.446 "is_configured": true, 00:14:29.446 "data_offset": 0, 00:14:29.446 "data_size": 65536 00:14:29.446 }, 00:14:29.446 { 00:14:29.446 "name": "BaseBdev3", 00:14:29.446 "uuid": "61316f01-ef2d-53c7-b90c-69027628da22", 00:14:29.446 "is_configured": true, 00:14:29.446 "data_offset": 0, 00:14:29.446 "data_size": 65536 00:14:29.446 } 00:14:29.446 ] 00:14:29.446 }' 00:14:29.446 04:12:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:29.446 04:12:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:29.446 04:12:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:29.705 04:12:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:29.706 04:12:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:30.275 [2024-11-21 04:12:29.951122] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:30.275 [2024-11-21 04:12:29.951252] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:30.275 [2024-11-21 04:12:29.951339] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:30.536 04:12:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < 
timeout )) 00:14:30.536 04:12:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:30.536 04:12:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:30.536 04:12:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:30.536 04:12:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:30.536 04:12:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:30.536 04:12:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.536 04:12:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.536 04:12:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.536 04:12:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.536 04:12:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.536 04:12:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:30.536 "name": "raid_bdev1", 00:14:30.536 "uuid": "b078e92e-c567-4df7-b401-faa4a978efb8", 00:14:30.536 "strip_size_kb": 64, 00:14:30.536 "state": "online", 00:14:30.536 "raid_level": "raid5f", 00:14:30.536 "superblock": false, 00:14:30.536 "num_base_bdevs": 3, 00:14:30.536 "num_base_bdevs_discovered": 3, 00:14:30.536 "num_base_bdevs_operational": 3, 00:14:30.536 "base_bdevs_list": [ 00:14:30.536 { 00:14:30.536 "name": "spare", 00:14:30.536 "uuid": "249ea6f8-7d33-5408-bc3a-9fcc9653f8a6", 00:14:30.536 "is_configured": true, 00:14:30.536 "data_offset": 0, 00:14:30.536 "data_size": 65536 00:14:30.536 }, 00:14:30.536 { 00:14:30.536 "name": "BaseBdev2", 00:14:30.536 "uuid": "925c6f2f-0861-58a9-9cfc-21eb254256d2", 00:14:30.536 "is_configured": true, 00:14:30.536 
"data_offset": 0, 00:14:30.536 "data_size": 65536 00:14:30.536 }, 00:14:30.536 { 00:14:30.536 "name": "BaseBdev3", 00:14:30.536 "uuid": "61316f01-ef2d-53c7-b90c-69027628da22", 00:14:30.536 "is_configured": true, 00:14:30.536 "data_offset": 0, 00:14:30.536 "data_size": 65536 00:14:30.536 } 00:14:30.536 ] 00:14:30.536 }' 00:14:30.536 04:12:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:30.796 04:12:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:30.796 04:12:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:30.796 04:12:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:30.796 04:12:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:14:30.796 04:12:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:30.796 04:12:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:30.796 04:12:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:30.796 04:12:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:30.796 04:12:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:30.796 04:12:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.796 04:12:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.796 04:12:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.796 04:12:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.796 04:12:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.796 04:12:30 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:30.796 "name": "raid_bdev1", 00:14:30.796 "uuid": "b078e92e-c567-4df7-b401-faa4a978efb8", 00:14:30.796 "strip_size_kb": 64, 00:14:30.796 "state": "online", 00:14:30.796 "raid_level": "raid5f", 00:14:30.797 "superblock": false, 00:14:30.797 "num_base_bdevs": 3, 00:14:30.797 "num_base_bdevs_discovered": 3, 00:14:30.797 "num_base_bdevs_operational": 3, 00:14:30.797 "base_bdevs_list": [ 00:14:30.797 { 00:14:30.797 "name": "spare", 00:14:30.797 "uuid": "249ea6f8-7d33-5408-bc3a-9fcc9653f8a6", 00:14:30.797 "is_configured": true, 00:14:30.797 "data_offset": 0, 00:14:30.797 "data_size": 65536 00:14:30.797 }, 00:14:30.797 { 00:14:30.797 "name": "BaseBdev2", 00:14:30.797 "uuid": "925c6f2f-0861-58a9-9cfc-21eb254256d2", 00:14:30.797 "is_configured": true, 00:14:30.797 "data_offset": 0, 00:14:30.797 "data_size": 65536 00:14:30.797 }, 00:14:30.797 { 00:14:30.797 "name": "BaseBdev3", 00:14:30.797 "uuid": "61316f01-ef2d-53c7-b90c-69027628da22", 00:14:30.797 "is_configured": true, 00:14:30.797 "data_offset": 0, 00:14:30.797 "data_size": 65536 00:14:30.797 } 00:14:30.797 ] 00:14:30.797 }' 00:14:30.797 04:12:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:30.797 04:12:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:30.797 04:12:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:30.797 04:12:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:30.797 04:12:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:30.797 04:12:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:30.797 04:12:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:30.797 04:12:30 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:30.797 04:12:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:30.797 04:12:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:30.797 04:12:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:30.797 04:12:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:30.797 04:12:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:30.797 04:12:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:30.797 04:12:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.797 04:12:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.797 04:12:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.797 04:12:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.797 04:12:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.056 04:12:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.056 "name": "raid_bdev1", 00:14:31.056 "uuid": "b078e92e-c567-4df7-b401-faa4a978efb8", 00:14:31.056 "strip_size_kb": 64, 00:14:31.056 "state": "online", 00:14:31.056 "raid_level": "raid5f", 00:14:31.056 "superblock": false, 00:14:31.056 "num_base_bdevs": 3, 00:14:31.056 "num_base_bdevs_discovered": 3, 00:14:31.056 "num_base_bdevs_operational": 3, 00:14:31.056 "base_bdevs_list": [ 00:14:31.056 { 00:14:31.056 "name": "spare", 00:14:31.056 "uuid": "249ea6f8-7d33-5408-bc3a-9fcc9653f8a6", 00:14:31.056 "is_configured": true, 00:14:31.056 "data_offset": 0, 00:14:31.056 "data_size": 65536 00:14:31.056 }, 00:14:31.056 { 00:14:31.056 
"name": "BaseBdev2", 00:14:31.056 "uuid": "925c6f2f-0861-58a9-9cfc-21eb254256d2", 00:14:31.056 "is_configured": true, 00:14:31.056 "data_offset": 0, 00:14:31.056 "data_size": 65536 00:14:31.056 }, 00:14:31.056 { 00:14:31.056 "name": "BaseBdev3", 00:14:31.056 "uuid": "61316f01-ef2d-53c7-b90c-69027628da22", 00:14:31.056 "is_configured": true, 00:14:31.056 "data_offset": 0, 00:14:31.056 "data_size": 65536 00:14:31.056 } 00:14:31.056 ] 00:14:31.056 }' 00:14:31.056 04:12:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.056 04:12:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.317 04:12:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:31.317 04:12:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.317 04:12:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.317 [2024-11-21 04:12:31.194876] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:31.317 [2024-11-21 04:12:31.194957] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:31.317 [2024-11-21 04:12:31.195091] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:31.317 [2024-11-21 04:12:31.195240] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:31.317 [2024-11-21 04:12:31.195287] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:14:31.317 04:12:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.317 04:12:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.317 04:12:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:14:31.317 04:12:31 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.317 04:12:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.317 04:12:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.317 04:12:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:31.317 04:12:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:31.317 04:12:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:31.317 04:12:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:31.317 04:12:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:31.317 04:12:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:31.317 04:12:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:31.317 04:12:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:31.317 04:12:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:31.317 04:12:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:31.317 04:12:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:31.317 04:12:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:31.317 04:12:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:31.577 /dev/nbd0 00:14:31.577 04:12:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:31.577 04:12:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:31.577 04:12:31 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:31.578 04:12:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:31.578 04:12:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:31.578 04:12:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:31.578 04:12:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:31.578 04:12:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:31.578 04:12:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:31.578 04:12:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:31.578 04:12:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:31.578 1+0 records in 00:14:31.578 1+0 records out 00:14:31.578 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000580635 s, 7.1 MB/s 00:14:31.578 04:12:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:31.578 04:12:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:31.578 04:12:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:31.578 04:12:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:31.578 04:12:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:31.578 04:12:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:31.578 04:12:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:31.578 04:12:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:31.838 /dev/nbd1 00:14:31.838 04:12:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:31.838 04:12:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:31.838 04:12:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:31.838 04:12:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:31.838 04:12:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:31.838 04:12:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:31.838 04:12:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:31.838 04:12:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:31.838 04:12:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:31.838 04:12:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:31.838 04:12:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:31.838 1+0 records in 00:14:31.838 1+0 records out 00:14:31.838 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000402212 s, 10.2 MB/s 00:14:31.838 04:12:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:31.838 04:12:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:31.838 04:12:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:31.838 04:12:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:31.838 04:12:31 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:31.838 04:12:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:31.838 04:12:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:31.838 04:12:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:32.099 04:12:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:32.099 04:12:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:32.099 04:12:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:32.099 04:12:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:32.099 04:12:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:32.099 04:12:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:32.099 04:12:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:32.099 04:12:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:32.359 04:12:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:32.359 04:12:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:32.359 04:12:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:32.359 04:12:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:32.359 04:12:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:32.359 04:12:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:32.359 04:12:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # 
return 0 00:14:32.359 04:12:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:32.359 04:12:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:32.359 04:12:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:32.359 04:12:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:32.359 04:12:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:32.359 04:12:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:32.359 04:12:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:32.359 04:12:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:32.359 04:12:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:32.359 04:12:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:32.359 04:12:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:32.359 04:12:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 92140 00:14:32.359 04:12:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 92140 ']' 00:14:32.359 04:12:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 92140 00:14:32.359 04:12:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:14:32.359 04:12:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:32.359 04:12:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92140 00:14:32.619 04:12:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:32.619 killing process with pid 92140 00:14:32.619 
Received shutdown signal, test time was about 60.000000 seconds 00:14:32.619 00:14:32.619 Latency(us) 00:14:32.619 [2024-11-21T04:12:32.592Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:32.619 [2024-11-21T04:12:32.592Z] =================================================================================================================== 00:14:32.619 [2024-11-21T04:12:32.592Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:32.619 04:12:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:32.619 04:12:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92140' 00:14:32.619 04:12:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 92140 00:14:32.619 [2024-11-21 04:12:32.336691] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:32.619 04:12:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 92140 00:14:32.619 [2024-11-21 04:12:32.409533] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:32.879 04:12:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:14:32.879 00:14:32.879 real 0m13.945s 00:14:32.879 user 0m17.345s 00:14:32.879 sys 0m2.120s 00:14:32.879 ************************************ 00:14:32.879 END TEST raid5f_rebuild_test 00:14:32.879 ************************************ 00:14:32.879 04:12:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:32.879 04:12:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.879 04:12:32 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:14:32.879 04:12:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:32.880 04:12:32 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:32.880 04:12:32 bdev_raid 
-- common/autotest_common.sh@10 -- # set +x 00:14:32.880 ************************************ 00:14:32.880 START TEST raid5f_rebuild_test_sb 00:14:32.880 ************************************ 00:14:32.880 04:12:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true 00:14:32.880 04:12:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:14:32.880 04:12:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:14:32.880 04:12:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:32.880 04:12:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:32.880 04:12:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:32.880 04:12:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:32.880 04:12:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:32.880 04:12:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:32.880 04:12:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:32.880 04:12:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:32.880 04:12:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:32.880 04:12:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:32.880 04:12:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:32.880 04:12:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:32.880 04:12:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:32.880 04:12:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 
00:14:32.880 04:12:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:32.880 04:12:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:32.880 04:12:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:32.880 04:12:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:32.880 04:12:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:32.880 04:12:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:32.880 04:12:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:32.880 04:12:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:14:32.880 04:12:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:14:32.880 04:12:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:14:32.880 04:12:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:14:32.880 04:12:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:32.880 04:12:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:32.880 04:12:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=92564 00:14:32.880 04:12:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:32.880 04:12:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 92564 00:14:32.880 04:12:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 92564 ']' 00:14:32.880 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:14:32.880 04:12:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:32.880 04:12:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:32.880 04:12:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:32.880 04:12:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:32.880 04:12:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.140 [2024-11-21 04:12:32.906527] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:14:33.140 [2024-11-21 04:12:32.906749] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92564 ] 00:14:33.140 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:33.140 Zero copy mechanism will not be used. 
00:14:33.140 [2024-11-21 04:12:33.065589] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:33.140 [2024-11-21 04:12:33.105750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:33.399 [2024-11-21 04:12:33.182310] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:33.400 [2024-11-21 04:12:33.182427] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:33.969 04:12:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:33.969 04:12:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:33.969 04:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:33.969 04:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:33.969 04:12:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.969 04:12:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.969 BaseBdev1_malloc 00:14:33.969 04:12:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.969 04:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:33.969 04:12:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.969 04:12:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.969 [2024-11-21 04:12:33.745254] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:33.969 [2024-11-21 04:12:33.745386] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:33.969 [2024-11-21 04:12:33.745461] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:14:33.969 
[2024-11-21 04:12:33.745499] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:33.969 [2024-11-21 04:12:33.748000] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:33.969 [2024-11-21 04:12:33.748079] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:33.969 BaseBdev1 00:14:33.969 04:12:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.969 04:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:33.969 04:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:33.969 04:12:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.969 04:12:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.969 BaseBdev2_malloc 00:14:33.969 04:12:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.969 04:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:33.969 04:12:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.969 04:12:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.969 [2024-11-21 04:12:33.779854] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:33.969 [2024-11-21 04:12:33.779950] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:33.969 [2024-11-21 04:12:33.779989] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:33.969 [2024-11-21 04:12:33.780014] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:33.969 [2024-11-21 04:12:33.782403] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:33.969 [2024-11-21 04:12:33.782478] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:33.969 BaseBdev2 00:14:33.969 04:12:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.969 04:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:33.970 04:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:33.970 04:12:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.970 04:12:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.970 BaseBdev3_malloc 00:14:33.970 04:12:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.970 04:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:33.970 04:12:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.970 04:12:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.970 [2024-11-21 04:12:33.814446] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:33.970 [2024-11-21 04:12:33.814545] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:33.970 [2024-11-21 04:12:33.814583] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:33.970 [2024-11-21 04:12:33.814609] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:33.970 [2024-11-21 04:12:33.817013] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:33.970 [2024-11-21 04:12:33.817082] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:14:33.970 BaseBdev3 00:14:33.970 04:12:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.970 04:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:33.970 04:12:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.970 04:12:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.970 spare_malloc 00:14:33.970 04:12:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.970 04:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:33.970 04:12:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.970 04:12:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.970 spare_delay 00:14:33.970 04:12:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.970 04:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:33.970 04:12:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.970 04:12:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.970 [2024-11-21 04:12:33.876540] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:33.970 [2024-11-21 04:12:33.876662] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:33.970 [2024-11-21 04:12:33.876722] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:33.970 [2024-11-21 04:12:33.876792] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:33.970 [2024-11-21 04:12:33.879727] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:33.970 [2024-11-21 04:12:33.879814] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:33.970 spare 00:14:33.970 04:12:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.970 04:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:14:33.970 04:12:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.970 04:12:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.970 [2024-11-21 04:12:33.888643] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:33.970 [2024-11-21 04:12:33.890724] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:33.970 [2024-11-21 04:12:33.890823] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:33.970 [2024-11-21 04:12:33.891052] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:14:33.970 [2024-11-21 04:12:33.891113] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:33.970 [2024-11-21 04:12:33.891418] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:14:33.970 [2024-11-21 04:12:33.891919] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:14:33.970 [2024-11-21 04:12:33.891967] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:14:33.970 [2024-11-21 04:12:33.892175] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:33.970 04:12:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.970 04:12:33 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:33.970 04:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:33.970 04:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:33.970 04:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:33.970 04:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:33.970 04:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:33.970 04:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:33.970 04:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:33.970 04:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:33.970 04:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:33.970 04:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.970 04:12:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.970 04:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.970 04:12:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.970 04:12:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.230 04:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:34.230 "name": "raid_bdev1", 00:14:34.230 "uuid": "10c29ace-1339-44ee-8a87-ea07fa9c7a85", 00:14:34.230 "strip_size_kb": 64, 00:14:34.230 "state": "online", 00:14:34.230 "raid_level": "raid5f", 00:14:34.230 "superblock": true, 
00:14:34.230 "num_base_bdevs": 3, 00:14:34.230 "num_base_bdevs_discovered": 3, 00:14:34.230 "num_base_bdevs_operational": 3, 00:14:34.230 "base_bdevs_list": [ 00:14:34.230 { 00:14:34.230 "name": "BaseBdev1", 00:14:34.230 "uuid": "827477f9-2285-5a77-b2f0-76f8fdbb75a3", 00:14:34.230 "is_configured": true, 00:14:34.230 "data_offset": 2048, 00:14:34.230 "data_size": 63488 00:14:34.230 }, 00:14:34.230 { 00:14:34.230 "name": "BaseBdev2", 00:14:34.230 "uuid": "882e3d36-62f8-5199-900e-786ad3a2cadd", 00:14:34.230 "is_configured": true, 00:14:34.230 "data_offset": 2048, 00:14:34.230 "data_size": 63488 00:14:34.230 }, 00:14:34.230 { 00:14:34.230 "name": "BaseBdev3", 00:14:34.230 "uuid": "a7c7cdf3-2059-5dbd-9c80-37cd7ff0f826", 00:14:34.230 "is_configured": true, 00:14:34.230 "data_offset": 2048, 00:14:34.230 "data_size": 63488 00:14:34.230 } 00:14:34.230 ] 00:14:34.230 }' 00:14:34.230 04:12:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:34.230 04:12:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.490 04:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:34.490 04:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:34.490 04:12:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.490 04:12:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.490 [2024-11-21 04:12:34.345892] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:34.490 04:12:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.490 04:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:14:34.490 04:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.490 04:12:34 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:34.490 04:12:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.490 04:12:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.490 04:12:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.490 04:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:34.490 04:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:34.490 04:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:34.490 04:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:34.490 04:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:34.490 04:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:34.490 04:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:34.490 04:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:34.490 04:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:34.490 04:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:34.490 04:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:34.490 04:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:34.490 04:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:34.490 04:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 
00:14:34.751 [2024-11-21 04:12:34.617371] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:14:34.751 /dev/nbd0 00:14:34.751 04:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:34.751 04:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:34.751 04:12:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:34.751 04:12:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:34.751 04:12:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:34.751 04:12:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:34.751 04:12:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:34.751 04:12:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:34.751 04:12:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:34.751 04:12:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:34.751 04:12:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:34.751 1+0 records in 00:14:34.751 1+0 records out 00:14:34.751 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000567323 s, 7.2 MB/s 00:14:34.751 04:12:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:34.751 04:12:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:34.751 04:12:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:34.751 04:12:34 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:34.751 04:12:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:34.751 04:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:34.751 04:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:34.751 04:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:14:34.751 04:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:14:34.751 04:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:14:34.751 04:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:14:35.321 496+0 records in 00:14:35.321 496+0 records out 00:14:35.321 65011712 bytes (65 MB, 62 MiB) copied, 0.288404 s, 225 MB/s 00:14:35.321 04:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:35.321 04:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:35.321 04:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:35.321 04:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:35.321 04:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:35.321 04:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:35.321 04:12:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:35.322 [2024-11-21 04:12:35.184173] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:35.322 04:12:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 
00:14:35.322 04:12:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:35.322 04:12:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:35.322 04:12:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:35.322 04:12:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:35.322 04:12:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:35.322 04:12:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:35.322 04:12:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:35.322 04:12:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:35.322 04:12:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.322 04:12:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.322 [2024-11-21 04:12:35.212113] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:35.322 04:12:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.322 04:12:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:35.322 04:12:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:35.322 04:12:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:35.322 04:12:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:35.322 04:12:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:35.322 04:12:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:35.322 04:12:35 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:35.322 04:12:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:35.322 04:12:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:35.322 04:12:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:35.322 04:12:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.322 04:12:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:35.322 04:12:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.322 04:12:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.322 04:12:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.322 04:12:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:35.322 "name": "raid_bdev1", 00:14:35.322 "uuid": "10c29ace-1339-44ee-8a87-ea07fa9c7a85", 00:14:35.322 "strip_size_kb": 64, 00:14:35.322 "state": "online", 00:14:35.322 "raid_level": "raid5f", 00:14:35.322 "superblock": true, 00:14:35.322 "num_base_bdevs": 3, 00:14:35.322 "num_base_bdevs_discovered": 2, 00:14:35.322 "num_base_bdevs_operational": 2, 00:14:35.322 "base_bdevs_list": [ 00:14:35.322 { 00:14:35.322 "name": null, 00:14:35.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.322 "is_configured": false, 00:14:35.322 "data_offset": 0, 00:14:35.322 "data_size": 63488 00:14:35.322 }, 00:14:35.322 { 00:14:35.322 "name": "BaseBdev2", 00:14:35.322 "uuid": "882e3d36-62f8-5199-900e-786ad3a2cadd", 00:14:35.322 "is_configured": true, 00:14:35.322 "data_offset": 2048, 00:14:35.322 "data_size": 63488 00:14:35.322 }, 00:14:35.322 { 00:14:35.322 "name": "BaseBdev3", 00:14:35.322 "uuid": 
"a7c7cdf3-2059-5dbd-9c80-37cd7ff0f826", 00:14:35.322 "is_configured": true, 00:14:35.322 "data_offset": 2048, 00:14:35.322 "data_size": 63488 00:14:35.322 } 00:14:35.322 ] 00:14:35.322 }' 00:14:35.322 04:12:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:35.322 04:12:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.892 04:12:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:35.892 04:12:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.892 04:12:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.892 [2024-11-21 04:12:35.615398] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:35.892 [2024-11-21 04:12:35.623320] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000255d0 00:14:35.892 04:12:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.892 04:12:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:35.892 [2024-11-21 04:12:35.625785] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:36.832 04:12:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:36.832 04:12:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:36.832 04:12:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:36.832 04:12:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:36.832 04:12:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:36.832 04:12:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:14:36.832 04:12:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.832 04:12:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.832 04:12:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.832 04:12:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.832 04:12:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:36.832 "name": "raid_bdev1", 00:14:36.832 "uuid": "10c29ace-1339-44ee-8a87-ea07fa9c7a85", 00:14:36.832 "strip_size_kb": 64, 00:14:36.832 "state": "online", 00:14:36.832 "raid_level": "raid5f", 00:14:36.832 "superblock": true, 00:14:36.832 "num_base_bdevs": 3, 00:14:36.832 "num_base_bdevs_discovered": 3, 00:14:36.832 "num_base_bdevs_operational": 3, 00:14:36.832 "process": { 00:14:36.832 "type": "rebuild", 00:14:36.832 "target": "spare", 00:14:36.832 "progress": { 00:14:36.832 "blocks": 20480, 00:14:36.832 "percent": 16 00:14:36.832 } 00:14:36.832 }, 00:14:36.832 "base_bdevs_list": [ 00:14:36.832 { 00:14:36.832 "name": "spare", 00:14:36.833 "uuid": "948ea45d-0b63-5f23-ae19-422f706bc94a", 00:14:36.833 "is_configured": true, 00:14:36.833 "data_offset": 2048, 00:14:36.833 "data_size": 63488 00:14:36.833 }, 00:14:36.833 { 00:14:36.833 "name": "BaseBdev2", 00:14:36.833 "uuid": "882e3d36-62f8-5199-900e-786ad3a2cadd", 00:14:36.833 "is_configured": true, 00:14:36.833 "data_offset": 2048, 00:14:36.833 "data_size": 63488 00:14:36.833 }, 00:14:36.833 { 00:14:36.833 "name": "BaseBdev3", 00:14:36.833 "uuid": "a7c7cdf3-2059-5dbd-9c80-37cd7ff0f826", 00:14:36.833 "is_configured": true, 00:14:36.833 "data_offset": 2048, 00:14:36.833 "data_size": 63488 00:14:36.833 } 00:14:36.833 ] 00:14:36.833 }' 00:14:36.833 04:12:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:36.833 04:12:36 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:36.833 04:12:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:36.833 04:12:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:36.833 04:12:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:36.833 04:12:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.833 04:12:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.833 [2024-11-21 04:12:36.785816] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:37.092 [2024-11-21 04:12:36.834015] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:37.092 [2024-11-21 04:12:36.834121] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:37.092 [2024-11-21 04:12:36.834139] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:37.092 [2024-11-21 04:12:36.834157] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:37.092 04:12:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.092 04:12:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:37.092 04:12:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:37.092 04:12:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:37.092 04:12:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:37.092 04:12:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:37.092 04:12:36 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:37.092 04:12:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:37.092 04:12:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:37.092 04:12:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:37.092 04:12:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.092 04:12:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.092 04:12:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.092 04:12:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.092 04:12:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.092 04:12:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.092 04:12:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.092 "name": "raid_bdev1", 00:14:37.092 "uuid": "10c29ace-1339-44ee-8a87-ea07fa9c7a85", 00:14:37.092 "strip_size_kb": 64, 00:14:37.092 "state": "online", 00:14:37.092 "raid_level": "raid5f", 00:14:37.092 "superblock": true, 00:14:37.092 "num_base_bdevs": 3, 00:14:37.092 "num_base_bdevs_discovered": 2, 00:14:37.092 "num_base_bdevs_operational": 2, 00:14:37.092 "base_bdevs_list": [ 00:14:37.092 { 00:14:37.092 "name": null, 00:14:37.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.092 "is_configured": false, 00:14:37.092 "data_offset": 0, 00:14:37.092 "data_size": 63488 00:14:37.092 }, 00:14:37.092 { 00:14:37.092 "name": "BaseBdev2", 00:14:37.092 "uuid": "882e3d36-62f8-5199-900e-786ad3a2cadd", 00:14:37.092 "is_configured": true, 00:14:37.092 "data_offset": 2048, 00:14:37.092 "data_size": 
63488 00:14:37.092 }, 00:14:37.092 { 00:14:37.092 "name": "BaseBdev3", 00:14:37.092 "uuid": "a7c7cdf3-2059-5dbd-9c80-37cd7ff0f826", 00:14:37.092 "is_configured": true, 00:14:37.092 "data_offset": 2048, 00:14:37.092 "data_size": 63488 00:14:37.092 } 00:14:37.092 ] 00:14:37.092 }' 00:14:37.092 04:12:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:37.092 04:12:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.351 04:12:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:37.351 04:12:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:37.351 04:12:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:37.351 04:12:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:37.351 04:12:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:37.351 04:12:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.351 04:12:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.351 04:12:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.351 04:12:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.351 04:12:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.611 04:12:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:37.611 "name": "raid_bdev1", 00:14:37.611 "uuid": "10c29ace-1339-44ee-8a87-ea07fa9c7a85", 00:14:37.611 "strip_size_kb": 64, 00:14:37.611 "state": "online", 00:14:37.611 "raid_level": "raid5f", 00:14:37.611 "superblock": true, 00:14:37.611 "num_base_bdevs": 3, 00:14:37.611 
"num_base_bdevs_discovered": 2, 00:14:37.611 "num_base_bdevs_operational": 2, 00:14:37.611 "base_bdevs_list": [ 00:14:37.611 { 00:14:37.611 "name": null, 00:14:37.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.611 "is_configured": false, 00:14:37.611 "data_offset": 0, 00:14:37.611 "data_size": 63488 00:14:37.611 }, 00:14:37.611 { 00:14:37.611 "name": "BaseBdev2", 00:14:37.611 "uuid": "882e3d36-62f8-5199-900e-786ad3a2cadd", 00:14:37.611 "is_configured": true, 00:14:37.611 "data_offset": 2048, 00:14:37.611 "data_size": 63488 00:14:37.611 }, 00:14:37.611 { 00:14:37.611 "name": "BaseBdev3", 00:14:37.611 "uuid": "a7c7cdf3-2059-5dbd-9c80-37cd7ff0f826", 00:14:37.611 "is_configured": true, 00:14:37.611 "data_offset": 2048, 00:14:37.611 "data_size": 63488 00:14:37.611 } 00:14:37.611 ] 00:14:37.611 }' 00:14:37.611 04:12:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:37.611 04:12:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:37.611 04:12:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:37.611 04:12:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:37.611 04:12:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:37.611 04:12:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.611 04:12:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.611 [2024-11-21 04:12:37.407094] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:37.611 [2024-11-21 04:12:37.413231] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000256a0 00:14:37.611 04:12:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.611 04:12:37 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:37.611 [2024-11-21 04:12:37.415740] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:38.551 04:12:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:38.551 04:12:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:38.551 04:12:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:38.551 04:12:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:38.551 04:12:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:38.551 04:12:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.551 04:12:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.551 04:12:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.551 04:12:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.551 04:12:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.551 04:12:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:38.551 "name": "raid_bdev1", 00:14:38.551 "uuid": "10c29ace-1339-44ee-8a87-ea07fa9c7a85", 00:14:38.551 "strip_size_kb": 64, 00:14:38.551 "state": "online", 00:14:38.551 "raid_level": "raid5f", 00:14:38.551 "superblock": true, 00:14:38.551 "num_base_bdevs": 3, 00:14:38.551 "num_base_bdevs_discovered": 3, 00:14:38.551 "num_base_bdevs_operational": 3, 00:14:38.551 "process": { 00:14:38.551 "type": "rebuild", 00:14:38.551 "target": "spare", 00:14:38.551 "progress": { 00:14:38.551 "blocks": 20480, 00:14:38.551 "percent": 16 00:14:38.551 } 
00:14:38.551 }, 00:14:38.551 "base_bdevs_list": [ 00:14:38.551 { 00:14:38.551 "name": "spare", 00:14:38.551 "uuid": "948ea45d-0b63-5f23-ae19-422f706bc94a", 00:14:38.551 "is_configured": true, 00:14:38.551 "data_offset": 2048, 00:14:38.551 "data_size": 63488 00:14:38.551 }, 00:14:38.551 { 00:14:38.551 "name": "BaseBdev2", 00:14:38.551 "uuid": "882e3d36-62f8-5199-900e-786ad3a2cadd", 00:14:38.551 "is_configured": true, 00:14:38.551 "data_offset": 2048, 00:14:38.551 "data_size": 63488 00:14:38.551 }, 00:14:38.551 { 00:14:38.551 "name": "BaseBdev3", 00:14:38.551 "uuid": "a7c7cdf3-2059-5dbd-9c80-37cd7ff0f826", 00:14:38.551 "is_configured": true, 00:14:38.551 "data_offset": 2048, 00:14:38.551 "data_size": 63488 00:14:38.551 } 00:14:38.551 ] 00:14:38.551 }' 00:14:38.551 04:12:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:38.551 04:12:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:38.551 04:12:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:38.812 04:12:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:38.812 04:12:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:38.812 04:12:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:38.812 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:38.812 04:12:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:14:38.812 04:12:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:14:38.812 04:12:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=474 00:14:38.812 04:12:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:38.812 04:12:38 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:38.812 04:12:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:38.812 04:12:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:38.812 04:12:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:38.812 04:12:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:38.812 04:12:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.812 04:12:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.812 04:12:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.812 04:12:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.812 04:12:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.812 04:12:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:38.812 "name": "raid_bdev1", 00:14:38.812 "uuid": "10c29ace-1339-44ee-8a87-ea07fa9c7a85", 00:14:38.812 "strip_size_kb": 64, 00:14:38.812 "state": "online", 00:14:38.812 "raid_level": "raid5f", 00:14:38.812 "superblock": true, 00:14:38.812 "num_base_bdevs": 3, 00:14:38.812 "num_base_bdevs_discovered": 3, 00:14:38.812 "num_base_bdevs_operational": 3, 00:14:38.812 "process": { 00:14:38.812 "type": "rebuild", 00:14:38.812 "target": "spare", 00:14:38.812 "progress": { 00:14:38.812 "blocks": 22528, 00:14:38.812 "percent": 17 00:14:38.812 } 00:14:38.812 }, 00:14:38.812 "base_bdevs_list": [ 00:14:38.812 { 00:14:38.812 "name": "spare", 00:14:38.812 "uuid": "948ea45d-0b63-5f23-ae19-422f706bc94a", 00:14:38.812 "is_configured": true, 00:14:38.812 "data_offset": 2048, 00:14:38.812 
"data_size": 63488 00:14:38.812 }, 00:14:38.812 { 00:14:38.812 "name": "BaseBdev2", 00:14:38.812 "uuid": "882e3d36-62f8-5199-900e-786ad3a2cadd", 00:14:38.812 "is_configured": true, 00:14:38.812 "data_offset": 2048, 00:14:38.812 "data_size": 63488 00:14:38.812 }, 00:14:38.812 { 00:14:38.812 "name": "BaseBdev3", 00:14:38.812 "uuid": "a7c7cdf3-2059-5dbd-9c80-37cd7ff0f826", 00:14:38.812 "is_configured": true, 00:14:38.812 "data_offset": 2048, 00:14:38.812 "data_size": 63488 00:14:38.812 } 00:14:38.812 ] 00:14:38.812 }' 00:14:38.812 04:12:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:38.812 04:12:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:38.812 04:12:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:38.812 04:12:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:38.812 04:12:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:39.751 04:12:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:39.751 04:12:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:39.751 04:12:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:39.751 04:12:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:39.751 04:12:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:39.751 04:12:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:39.751 04:12:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.751 04:12:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:14:39.751 04:12:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.751 04:12:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.011 04:12:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.011 04:12:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:40.011 "name": "raid_bdev1", 00:14:40.011 "uuid": "10c29ace-1339-44ee-8a87-ea07fa9c7a85", 00:14:40.011 "strip_size_kb": 64, 00:14:40.011 "state": "online", 00:14:40.011 "raid_level": "raid5f", 00:14:40.011 "superblock": true, 00:14:40.011 "num_base_bdevs": 3, 00:14:40.011 "num_base_bdevs_discovered": 3, 00:14:40.011 "num_base_bdevs_operational": 3, 00:14:40.011 "process": { 00:14:40.011 "type": "rebuild", 00:14:40.011 "target": "spare", 00:14:40.011 "progress": { 00:14:40.011 "blocks": 45056, 00:14:40.011 "percent": 35 00:14:40.011 } 00:14:40.011 }, 00:14:40.011 "base_bdevs_list": [ 00:14:40.011 { 00:14:40.011 "name": "spare", 00:14:40.011 "uuid": "948ea45d-0b63-5f23-ae19-422f706bc94a", 00:14:40.011 "is_configured": true, 00:14:40.011 "data_offset": 2048, 00:14:40.011 "data_size": 63488 00:14:40.011 }, 00:14:40.011 { 00:14:40.011 "name": "BaseBdev2", 00:14:40.011 "uuid": "882e3d36-62f8-5199-900e-786ad3a2cadd", 00:14:40.011 "is_configured": true, 00:14:40.011 "data_offset": 2048, 00:14:40.011 "data_size": 63488 00:14:40.011 }, 00:14:40.011 { 00:14:40.011 "name": "BaseBdev3", 00:14:40.011 "uuid": "a7c7cdf3-2059-5dbd-9c80-37cd7ff0f826", 00:14:40.011 "is_configured": true, 00:14:40.011 "data_offset": 2048, 00:14:40.011 "data_size": 63488 00:14:40.011 } 00:14:40.011 ] 00:14:40.011 }' 00:14:40.011 04:12:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:40.011 04:12:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:40.011 04:12:39 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:40.011 04:12:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:40.011 04:12:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:40.951 04:12:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:40.951 04:12:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:40.951 04:12:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:40.951 04:12:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:40.951 04:12:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:40.951 04:12:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:40.951 04:12:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.951 04:12:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.951 04:12:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.951 04:12:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.951 04:12:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.951 04:12:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:40.951 "name": "raid_bdev1", 00:14:40.951 "uuid": "10c29ace-1339-44ee-8a87-ea07fa9c7a85", 00:14:40.951 "strip_size_kb": 64, 00:14:40.951 "state": "online", 00:14:40.951 "raid_level": "raid5f", 00:14:40.951 "superblock": true, 00:14:40.951 "num_base_bdevs": 3, 00:14:40.951 "num_base_bdevs_discovered": 3, 00:14:40.951 "num_base_bdevs_operational": 
3, 00:14:40.951 "process": { 00:14:40.951 "type": "rebuild", 00:14:40.951 "target": "spare", 00:14:40.951 "progress": { 00:14:40.951 "blocks": 69632, 00:14:40.951 "percent": 54 00:14:40.951 } 00:14:40.951 }, 00:14:40.951 "base_bdevs_list": [ 00:14:40.951 { 00:14:40.951 "name": "spare", 00:14:40.951 "uuid": "948ea45d-0b63-5f23-ae19-422f706bc94a", 00:14:40.951 "is_configured": true, 00:14:40.951 "data_offset": 2048, 00:14:40.951 "data_size": 63488 00:14:40.951 }, 00:14:40.951 { 00:14:40.951 "name": "BaseBdev2", 00:14:40.951 "uuid": "882e3d36-62f8-5199-900e-786ad3a2cadd", 00:14:40.951 "is_configured": true, 00:14:40.951 "data_offset": 2048, 00:14:40.951 "data_size": 63488 00:14:40.951 }, 00:14:40.951 { 00:14:40.951 "name": "BaseBdev3", 00:14:40.951 "uuid": "a7c7cdf3-2059-5dbd-9c80-37cd7ff0f826", 00:14:40.951 "is_configured": true, 00:14:40.951 "data_offset": 2048, 00:14:40.951 "data_size": 63488 00:14:40.951 } 00:14:40.951 ] 00:14:40.951 }' 00:14:40.951 04:12:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:41.211 04:12:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:41.211 04:12:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:41.211 04:12:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:41.211 04:12:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:42.150 04:12:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:42.150 04:12:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:42.150 04:12:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:42.150 04:12:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:42.150 
04:12:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:42.150 04:12:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:42.150 04:12:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.150 04:12:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.150 04:12:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.150 04:12:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.150 04:12:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.150 04:12:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:42.150 "name": "raid_bdev1", 00:14:42.150 "uuid": "10c29ace-1339-44ee-8a87-ea07fa9c7a85", 00:14:42.150 "strip_size_kb": 64, 00:14:42.150 "state": "online", 00:14:42.151 "raid_level": "raid5f", 00:14:42.151 "superblock": true, 00:14:42.151 "num_base_bdevs": 3, 00:14:42.151 "num_base_bdevs_discovered": 3, 00:14:42.151 "num_base_bdevs_operational": 3, 00:14:42.151 "process": { 00:14:42.151 "type": "rebuild", 00:14:42.151 "target": "spare", 00:14:42.151 "progress": { 00:14:42.151 "blocks": 92160, 00:14:42.151 "percent": 72 00:14:42.151 } 00:14:42.151 }, 00:14:42.151 "base_bdevs_list": [ 00:14:42.151 { 00:14:42.151 "name": "spare", 00:14:42.151 "uuid": "948ea45d-0b63-5f23-ae19-422f706bc94a", 00:14:42.151 "is_configured": true, 00:14:42.151 "data_offset": 2048, 00:14:42.151 "data_size": 63488 00:14:42.151 }, 00:14:42.151 { 00:14:42.151 "name": "BaseBdev2", 00:14:42.151 "uuid": "882e3d36-62f8-5199-900e-786ad3a2cadd", 00:14:42.151 "is_configured": true, 00:14:42.151 "data_offset": 2048, 00:14:42.151 "data_size": 63488 00:14:42.151 }, 00:14:42.151 { 00:14:42.151 "name": "BaseBdev3", 00:14:42.151 "uuid": 
"a7c7cdf3-2059-5dbd-9c80-37cd7ff0f826", 00:14:42.151 "is_configured": true, 00:14:42.151 "data_offset": 2048, 00:14:42.151 "data_size": 63488 00:14:42.151 } 00:14:42.151 ] 00:14:42.151 }' 00:14:42.151 04:12:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:42.151 04:12:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:42.151 04:12:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:42.410 04:12:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:42.410 04:12:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:43.352 04:12:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:43.352 04:12:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:43.352 04:12:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:43.352 04:12:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:43.352 04:12:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:43.352 04:12:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:43.352 04:12:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.352 04:12:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.352 04:12:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.352 04:12:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.352 04:12:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.352 
04:12:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:43.352 "name": "raid_bdev1", 00:14:43.352 "uuid": "10c29ace-1339-44ee-8a87-ea07fa9c7a85", 00:14:43.352 "strip_size_kb": 64, 00:14:43.352 "state": "online", 00:14:43.352 "raid_level": "raid5f", 00:14:43.352 "superblock": true, 00:14:43.352 "num_base_bdevs": 3, 00:14:43.352 "num_base_bdevs_discovered": 3, 00:14:43.352 "num_base_bdevs_operational": 3, 00:14:43.352 "process": { 00:14:43.352 "type": "rebuild", 00:14:43.352 "target": "spare", 00:14:43.352 "progress": { 00:14:43.352 "blocks": 116736, 00:14:43.352 "percent": 91 00:14:43.352 } 00:14:43.352 }, 00:14:43.352 "base_bdevs_list": [ 00:14:43.352 { 00:14:43.352 "name": "spare", 00:14:43.352 "uuid": "948ea45d-0b63-5f23-ae19-422f706bc94a", 00:14:43.352 "is_configured": true, 00:14:43.352 "data_offset": 2048, 00:14:43.352 "data_size": 63488 00:14:43.352 }, 00:14:43.352 { 00:14:43.352 "name": "BaseBdev2", 00:14:43.352 "uuid": "882e3d36-62f8-5199-900e-786ad3a2cadd", 00:14:43.352 "is_configured": true, 00:14:43.352 "data_offset": 2048, 00:14:43.352 "data_size": 63488 00:14:43.352 }, 00:14:43.352 { 00:14:43.352 "name": "BaseBdev3", 00:14:43.352 "uuid": "a7c7cdf3-2059-5dbd-9c80-37cd7ff0f826", 00:14:43.352 "is_configured": true, 00:14:43.352 "data_offset": 2048, 00:14:43.352 "data_size": 63488 00:14:43.352 } 00:14:43.352 ] 00:14:43.352 }' 00:14:43.352 04:12:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:43.352 04:12:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:43.352 04:12:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:43.352 04:12:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:43.353 04:12:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:43.922 [2024-11-21 04:12:43.654381] 
bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:43.922 [2024-11-21 04:12:43.654445] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:43.922 [2024-11-21 04:12:43.654555] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:44.492 04:12:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:44.492 04:12:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:44.492 04:12:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:44.492 04:12:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:44.492 04:12:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:44.492 04:12:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:44.492 04:12:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.492 04:12:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.492 04:12:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.492 04:12:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.492 04:12:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.492 04:12:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:44.492 "name": "raid_bdev1", 00:14:44.492 "uuid": "10c29ace-1339-44ee-8a87-ea07fa9c7a85", 00:14:44.492 "strip_size_kb": 64, 00:14:44.492 "state": "online", 00:14:44.492 "raid_level": "raid5f", 00:14:44.492 "superblock": true, 00:14:44.492 "num_base_bdevs": 3, 00:14:44.492 "num_base_bdevs_discovered": 3, 
00:14:44.492 "num_base_bdevs_operational": 3, 00:14:44.492 "base_bdevs_list": [ 00:14:44.492 { 00:14:44.492 "name": "spare", 00:14:44.492 "uuid": "948ea45d-0b63-5f23-ae19-422f706bc94a", 00:14:44.492 "is_configured": true, 00:14:44.492 "data_offset": 2048, 00:14:44.492 "data_size": 63488 00:14:44.492 }, 00:14:44.492 { 00:14:44.492 "name": "BaseBdev2", 00:14:44.492 "uuid": "882e3d36-62f8-5199-900e-786ad3a2cadd", 00:14:44.492 "is_configured": true, 00:14:44.492 "data_offset": 2048, 00:14:44.492 "data_size": 63488 00:14:44.492 }, 00:14:44.492 { 00:14:44.492 "name": "BaseBdev3", 00:14:44.492 "uuid": "a7c7cdf3-2059-5dbd-9c80-37cd7ff0f826", 00:14:44.492 "is_configured": true, 00:14:44.492 "data_offset": 2048, 00:14:44.492 "data_size": 63488 00:14:44.492 } 00:14:44.492 ] 00:14:44.492 }' 00:14:44.492 04:12:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:44.492 04:12:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:44.492 04:12:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:44.492 04:12:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:44.492 04:12:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:14:44.492 04:12:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:44.492 04:12:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:44.492 04:12:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:44.492 04:12:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:44.492 04:12:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:44.492 04:12:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:44.492 04:12:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.492 04:12:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.492 04:12:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.492 04:12:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.492 04:12:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:44.492 "name": "raid_bdev1", 00:14:44.492 "uuid": "10c29ace-1339-44ee-8a87-ea07fa9c7a85", 00:14:44.492 "strip_size_kb": 64, 00:14:44.492 "state": "online", 00:14:44.492 "raid_level": "raid5f", 00:14:44.492 "superblock": true, 00:14:44.492 "num_base_bdevs": 3, 00:14:44.492 "num_base_bdevs_discovered": 3, 00:14:44.492 "num_base_bdevs_operational": 3, 00:14:44.492 "base_bdevs_list": [ 00:14:44.492 { 00:14:44.492 "name": "spare", 00:14:44.492 "uuid": "948ea45d-0b63-5f23-ae19-422f706bc94a", 00:14:44.492 "is_configured": true, 00:14:44.492 "data_offset": 2048, 00:14:44.492 "data_size": 63488 00:14:44.492 }, 00:14:44.492 { 00:14:44.492 "name": "BaseBdev2", 00:14:44.492 "uuid": "882e3d36-62f8-5199-900e-786ad3a2cadd", 00:14:44.492 "is_configured": true, 00:14:44.492 "data_offset": 2048, 00:14:44.492 "data_size": 63488 00:14:44.492 }, 00:14:44.492 { 00:14:44.492 "name": "BaseBdev3", 00:14:44.492 "uuid": "a7c7cdf3-2059-5dbd-9c80-37cd7ff0f826", 00:14:44.492 "is_configured": true, 00:14:44.492 "data_offset": 2048, 00:14:44.492 "data_size": 63488 00:14:44.492 } 00:14:44.492 ] 00:14:44.492 }' 00:14:44.752 04:12:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:44.752 04:12:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:44.752 04:12:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:14:44.752 04:12:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:44.752 04:12:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:44.752 04:12:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:44.752 04:12:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:44.752 04:12:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:44.752 04:12:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:44.752 04:12:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:44.752 04:12:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.752 04:12:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.752 04:12:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:44.752 04:12:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.752 04:12:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.752 04:12:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.752 04:12:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.752 04:12:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.752 04:12:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.752 04:12:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.752 "name": "raid_bdev1", 00:14:44.752 "uuid": 
"10c29ace-1339-44ee-8a87-ea07fa9c7a85", 00:14:44.752 "strip_size_kb": 64, 00:14:44.752 "state": "online", 00:14:44.752 "raid_level": "raid5f", 00:14:44.752 "superblock": true, 00:14:44.752 "num_base_bdevs": 3, 00:14:44.752 "num_base_bdevs_discovered": 3, 00:14:44.752 "num_base_bdevs_operational": 3, 00:14:44.752 "base_bdevs_list": [ 00:14:44.752 { 00:14:44.752 "name": "spare", 00:14:44.752 "uuid": "948ea45d-0b63-5f23-ae19-422f706bc94a", 00:14:44.752 "is_configured": true, 00:14:44.752 "data_offset": 2048, 00:14:44.752 "data_size": 63488 00:14:44.752 }, 00:14:44.752 { 00:14:44.752 "name": "BaseBdev2", 00:14:44.752 "uuid": "882e3d36-62f8-5199-900e-786ad3a2cadd", 00:14:44.752 "is_configured": true, 00:14:44.752 "data_offset": 2048, 00:14:44.752 "data_size": 63488 00:14:44.752 }, 00:14:44.752 { 00:14:44.752 "name": "BaseBdev3", 00:14:44.752 "uuid": "a7c7cdf3-2059-5dbd-9c80-37cd7ff0f826", 00:14:44.752 "is_configured": true, 00:14:44.752 "data_offset": 2048, 00:14:44.752 "data_size": 63488 00:14:44.752 } 00:14:44.752 ] 00:14:44.752 }' 00:14:44.752 04:12:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.752 04:12:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.323 04:12:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:45.323 04:12:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.323 04:12:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.323 [2024-11-21 04:12:45.001774] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:45.323 [2024-11-21 04:12:45.001856] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:45.323 [2024-11-21 04:12:45.001985] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:45.323 [2024-11-21 04:12:45.002118] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:45.323 [2024-11-21 04:12:45.002163] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:14:45.323 04:12:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.323 04:12:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.323 04:12:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:14:45.323 04:12:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.323 04:12:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.323 04:12:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.323 04:12:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:45.323 04:12:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:45.323 04:12:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:45.323 04:12:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:45.323 04:12:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:45.323 04:12:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:45.323 04:12:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:45.323 04:12:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:45.323 04:12:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:45.323 04:12:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # 
local i 00:14:45.323 04:12:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:45.323 04:12:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:45.323 04:12:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:45.323 /dev/nbd0 00:14:45.323 04:12:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:45.584 04:12:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:45.584 04:12:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:45.585 04:12:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:45.585 04:12:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:45.585 04:12:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:45.585 04:12:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:45.585 04:12:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:45.585 04:12:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:45.585 04:12:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:45.585 04:12:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:45.585 1+0 records in 00:14:45.585 1+0 records out 00:14:45.585 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000591813 s, 6.9 MB/s 00:14:45.585 04:12:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:45.585 04:12:45 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:45.585 04:12:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:45.585 04:12:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:45.585 04:12:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:45.585 04:12:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:45.585 04:12:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:45.585 04:12:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:45.585 /dev/nbd1 00:14:45.585 04:12:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:45.585 04:12:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:45.585 04:12:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:45.585 04:12:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:45.585 04:12:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:45.585 04:12:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:45.585 04:12:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:45.585 04:12:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:45.585 04:12:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:45.585 04:12:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:45.585 04:12:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # 
dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:45.585 1+0 records in 00:14:45.585 1+0 records out 00:14:45.585 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000444388 s, 9.2 MB/s 00:14:45.585 04:12:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:45.585 04:12:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:45.585 04:12:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:45.585 04:12:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:45.845 04:12:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:45.845 04:12:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:45.845 04:12:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:45.845 04:12:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:45.845 04:12:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:45.845 04:12:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:45.845 04:12:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:45.845 04:12:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:45.845 04:12:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:45.845 04:12:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:45.845 04:12:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_stop_disk /dev/nbd0 00:14:45.845 04:12:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:46.106 04:12:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:46.106 04:12:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:46.106 04:12:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:46.106 04:12:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:46.106 04:12:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:46.106 04:12:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:46.106 04:12:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:46.106 04:12:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:46.106 04:12:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:46.106 04:12:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:46.106 04:12:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:46.106 04:12:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:46.106 04:12:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:46.106 04:12:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:46.106 04:12:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:46.106 04:12:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:46.106 04:12:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:46.106 04:12:46 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:46.106 04:12:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:46.106 04:12:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.106 04:12:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.106 04:12:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.106 04:12:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:46.106 04:12:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.106 04:12:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.106 [2024-11-21 04:12:46.056337] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:46.106 [2024-11-21 04:12:46.056477] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:46.106 [2024-11-21 04:12:46.056536] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:46.106 [2024-11-21 04:12:46.056605] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:46.106 [2024-11-21 04:12:46.059917] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:46.106 [2024-11-21 04:12:46.059997] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:46.106 spare 00:14:46.106 [2024-11-21 04:12:46.060124] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:46.106 [2024-11-21 04:12:46.060194] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:46.106 [2024-11-21 04:12:46.060382] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:46.106 [2024-11-21 04:12:46.060543] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:46.106 04:12:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.106 04:12:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:46.106 04:12:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.106 04:12:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.366 [2024-11-21 04:12:46.160455] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:14:46.366 [2024-11-21 04:12:46.160528] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:46.366 [2024-11-21 04:12:46.160901] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000043d50 00:14:46.366 [2024-11-21 04:12:46.161367] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:14:46.366 [2024-11-21 04:12:46.161390] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:14:46.366 [2024-11-21 04:12:46.161555] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:46.366 04:12:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.366 04:12:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:46.366 04:12:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:46.366 04:12:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:46.366 04:12:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:46.366 04:12:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:46.366 04:12:46 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:46.366 04:12:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.366 04:12:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.366 04:12:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.366 04:12:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.366 04:12:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.366 04:12:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.366 04:12:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.366 04:12:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.366 04:12:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.366 04:12:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.366 "name": "raid_bdev1", 00:14:46.366 "uuid": "10c29ace-1339-44ee-8a87-ea07fa9c7a85", 00:14:46.366 "strip_size_kb": 64, 00:14:46.366 "state": "online", 00:14:46.366 "raid_level": "raid5f", 00:14:46.366 "superblock": true, 00:14:46.366 "num_base_bdevs": 3, 00:14:46.366 "num_base_bdevs_discovered": 3, 00:14:46.366 "num_base_bdevs_operational": 3, 00:14:46.366 "base_bdevs_list": [ 00:14:46.366 { 00:14:46.366 "name": "spare", 00:14:46.366 "uuid": "948ea45d-0b63-5f23-ae19-422f706bc94a", 00:14:46.366 "is_configured": true, 00:14:46.366 "data_offset": 2048, 00:14:46.366 "data_size": 63488 00:14:46.366 }, 00:14:46.366 { 00:14:46.366 "name": "BaseBdev2", 00:14:46.366 "uuid": "882e3d36-62f8-5199-900e-786ad3a2cadd", 00:14:46.366 "is_configured": true, 00:14:46.366 "data_offset": 2048, 00:14:46.366 
"data_size": 63488 00:14:46.366 }, 00:14:46.366 { 00:14:46.366 "name": "BaseBdev3", 00:14:46.366 "uuid": "a7c7cdf3-2059-5dbd-9c80-37cd7ff0f826", 00:14:46.366 "is_configured": true, 00:14:46.366 "data_offset": 2048, 00:14:46.366 "data_size": 63488 00:14:46.366 } 00:14:46.366 ] 00:14:46.366 }' 00:14:46.366 04:12:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.366 04:12:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.626 04:12:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:46.626 04:12:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:46.626 04:12:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:46.627 04:12:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:46.627 04:12:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:46.627 04:12:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.627 04:12:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.627 04:12:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.627 04:12:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.627 04:12:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.887 04:12:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:46.887 "name": "raid_bdev1", 00:14:46.887 "uuid": "10c29ace-1339-44ee-8a87-ea07fa9c7a85", 00:14:46.887 "strip_size_kb": 64, 00:14:46.887 "state": "online", 00:14:46.887 "raid_level": "raid5f", 00:14:46.887 "superblock": true, 00:14:46.887 "num_base_bdevs": 3, 00:14:46.887 
"num_base_bdevs_discovered": 3, 00:14:46.887 "num_base_bdevs_operational": 3, 00:14:46.887 "base_bdevs_list": [ 00:14:46.887 { 00:14:46.887 "name": "spare", 00:14:46.887 "uuid": "948ea45d-0b63-5f23-ae19-422f706bc94a", 00:14:46.887 "is_configured": true, 00:14:46.887 "data_offset": 2048, 00:14:46.887 "data_size": 63488 00:14:46.887 }, 00:14:46.887 { 00:14:46.887 "name": "BaseBdev2", 00:14:46.887 "uuid": "882e3d36-62f8-5199-900e-786ad3a2cadd", 00:14:46.887 "is_configured": true, 00:14:46.887 "data_offset": 2048, 00:14:46.887 "data_size": 63488 00:14:46.887 }, 00:14:46.887 { 00:14:46.887 "name": "BaseBdev3", 00:14:46.887 "uuid": "a7c7cdf3-2059-5dbd-9c80-37cd7ff0f826", 00:14:46.887 "is_configured": true, 00:14:46.887 "data_offset": 2048, 00:14:46.887 "data_size": 63488 00:14:46.887 } 00:14:46.887 ] 00:14:46.887 }' 00:14:46.887 04:12:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:46.887 04:12:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:46.887 04:12:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:46.887 04:12:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:46.887 04:12:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.887 04:12:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.887 04:12:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.887 04:12:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:46.887 04:12:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.887 04:12:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:46.887 04:12:46 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:46.887 04:12:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.887 04:12:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.887 [2024-11-21 04:12:46.751564] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:46.887 04:12:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.887 04:12:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:46.887 04:12:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:46.887 04:12:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:46.887 04:12:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:46.887 04:12:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:46.887 04:12:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:46.887 04:12:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.887 04:12:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.887 04:12:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.887 04:12:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.887 04:12:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.887 04:12:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.887 04:12:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.887 04:12:46 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.887 04:12:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.887 04:12:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.887 "name": "raid_bdev1", 00:14:46.887 "uuid": "10c29ace-1339-44ee-8a87-ea07fa9c7a85", 00:14:46.887 "strip_size_kb": 64, 00:14:46.887 "state": "online", 00:14:46.887 "raid_level": "raid5f", 00:14:46.887 "superblock": true, 00:14:46.887 "num_base_bdevs": 3, 00:14:46.887 "num_base_bdevs_discovered": 2, 00:14:46.887 "num_base_bdevs_operational": 2, 00:14:46.887 "base_bdevs_list": [ 00:14:46.887 { 00:14:46.887 "name": null, 00:14:46.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.887 "is_configured": false, 00:14:46.887 "data_offset": 0, 00:14:46.887 "data_size": 63488 00:14:46.887 }, 00:14:46.887 { 00:14:46.887 "name": "BaseBdev2", 00:14:46.887 "uuid": "882e3d36-62f8-5199-900e-786ad3a2cadd", 00:14:46.887 "is_configured": true, 00:14:46.887 "data_offset": 2048, 00:14:46.887 "data_size": 63488 00:14:46.887 }, 00:14:46.887 { 00:14:46.887 "name": "BaseBdev3", 00:14:46.887 "uuid": "a7c7cdf3-2059-5dbd-9c80-37cd7ff0f826", 00:14:46.887 "is_configured": true, 00:14:46.887 "data_offset": 2048, 00:14:46.887 "data_size": 63488 00:14:46.887 } 00:14:46.887 ] 00:14:46.887 }' 00:14:46.888 04:12:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.888 04:12:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.457 04:12:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:47.458 04:12:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.458 04:12:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.458 [2024-11-21 04:12:47.194915] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:47.458 [2024-11-21 04:12:47.195100] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:47.458 [2024-11-21 04:12:47.195143] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:47.458 [2024-11-21 04:12:47.195187] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:47.458 [2024-11-21 04:12:47.202973] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000043e20 00:14:47.458 04:12:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.458 04:12:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:47.458 [2024-11-21 04:12:47.205499] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:48.398 04:12:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:48.398 04:12:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:48.398 04:12:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:48.398 04:12:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:48.398 04:12:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:48.398 04:12:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.398 04:12:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:48.398 04:12:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.398 04:12:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:14:48.398 04:12:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.398 04:12:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:48.398 "name": "raid_bdev1", 00:14:48.398 "uuid": "10c29ace-1339-44ee-8a87-ea07fa9c7a85", 00:14:48.398 "strip_size_kb": 64, 00:14:48.398 "state": "online", 00:14:48.398 "raid_level": "raid5f", 00:14:48.398 "superblock": true, 00:14:48.398 "num_base_bdevs": 3, 00:14:48.398 "num_base_bdevs_discovered": 3, 00:14:48.398 "num_base_bdevs_operational": 3, 00:14:48.398 "process": { 00:14:48.398 "type": "rebuild", 00:14:48.398 "target": "spare", 00:14:48.398 "progress": { 00:14:48.398 "blocks": 20480, 00:14:48.398 "percent": 16 00:14:48.398 } 00:14:48.398 }, 00:14:48.398 "base_bdevs_list": [ 00:14:48.398 { 00:14:48.398 "name": "spare", 00:14:48.398 "uuid": "948ea45d-0b63-5f23-ae19-422f706bc94a", 00:14:48.398 "is_configured": true, 00:14:48.398 "data_offset": 2048, 00:14:48.398 "data_size": 63488 00:14:48.398 }, 00:14:48.398 { 00:14:48.398 "name": "BaseBdev2", 00:14:48.398 "uuid": "882e3d36-62f8-5199-900e-786ad3a2cadd", 00:14:48.398 "is_configured": true, 00:14:48.398 "data_offset": 2048, 00:14:48.398 "data_size": 63488 00:14:48.398 }, 00:14:48.398 { 00:14:48.398 "name": "BaseBdev3", 00:14:48.398 "uuid": "a7c7cdf3-2059-5dbd-9c80-37cd7ff0f826", 00:14:48.398 "is_configured": true, 00:14:48.398 "data_offset": 2048, 00:14:48.398 "data_size": 63488 00:14:48.398 } 00:14:48.398 ] 00:14:48.398 }' 00:14:48.398 04:12:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:48.398 04:12:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:48.398 04:12:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:48.398 04:12:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:14:48.398 04:12:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:48.398 04:12:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.398 04:12:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.398 [2024-11-21 04:12:48.368981] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:48.658 [2024-11-21 04:12:48.413612] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:48.658 [2024-11-21 04:12:48.413718] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:48.658 [2024-11-21 04:12:48.413773] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:48.658 [2024-11-21 04:12:48.413794] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:48.658 04:12:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.658 04:12:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:48.658 04:12:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:48.658 04:12:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:48.658 04:12:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:48.658 04:12:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:48.658 04:12:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:48.658 04:12:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:48.658 04:12:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:48.658 04:12:48 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:48.658 04:12:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:48.658 04:12:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.658 04:12:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:48.658 04:12:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.658 04:12:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.658 04:12:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.658 04:12:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:48.658 "name": "raid_bdev1", 00:14:48.658 "uuid": "10c29ace-1339-44ee-8a87-ea07fa9c7a85", 00:14:48.658 "strip_size_kb": 64, 00:14:48.658 "state": "online", 00:14:48.658 "raid_level": "raid5f", 00:14:48.658 "superblock": true, 00:14:48.658 "num_base_bdevs": 3, 00:14:48.658 "num_base_bdevs_discovered": 2, 00:14:48.658 "num_base_bdevs_operational": 2, 00:14:48.658 "base_bdevs_list": [ 00:14:48.658 { 00:14:48.658 "name": null, 00:14:48.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.658 "is_configured": false, 00:14:48.658 "data_offset": 0, 00:14:48.658 "data_size": 63488 00:14:48.658 }, 00:14:48.658 { 00:14:48.658 "name": "BaseBdev2", 00:14:48.659 "uuid": "882e3d36-62f8-5199-900e-786ad3a2cadd", 00:14:48.659 "is_configured": true, 00:14:48.659 "data_offset": 2048, 00:14:48.659 "data_size": 63488 00:14:48.659 }, 00:14:48.659 { 00:14:48.659 "name": "BaseBdev3", 00:14:48.659 "uuid": "a7c7cdf3-2059-5dbd-9c80-37cd7ff0f826", 00:14:48.659 "is_configured": true, 00:14:48.659 "data_offset": 2048, 00:14:48.659 "data_size": 63488 00:14:48.659 } 00:14:48.659 ] 00:14:48.659 }' 00:14:48.659 04:12:48 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:48.659 04:12:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.234 04:12:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:49.234 04:12:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.234 04:12:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.234 [2024-11-21 04:12:48.922435] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:49.234 [2024-11-21 04:12:48.922557] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:49.234 [2024-11-21 04:12:48.922597] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:14:49.234 [2024-11-21 04:12:48.922624] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:49.234 [2024-11-21 04:12:48.923235] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:49.234 [2024-11-21 04:12:48.923296] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:49.234 [2024-11-21 04:12:48.923430] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:49.234 [2024-11-21 04:12:48.923471] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:49.234 [2024-11-21 04:12:48.923553] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:49.234 [2024-11-21 04:12:48.923620] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:49.234 [2024-11-21 04:12:48.929634] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000043ef0 00:14:49.234 spare 00:14:49.234 04:12:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.234 04:12:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:49.234 [2024-11-21 04:12:48.932152] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:50.213 04:12:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:50.213 04:12:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:50.213 04:12:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:50.213 04:12:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:50.213 04:12:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:50.213 04:12:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.213 04:12:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.213 04:12:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:50.213 04:12:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.213 04:12:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.213 04:12:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:50.213 "name": "raid_bdev1", 00:14:50.213 "uuid": "10c29ace-1339-44ee-8a87-ea07fa9c7a85", 00:14:50.213 "strip_size_kb": 64, 00:14:50.213 "state": 
"online", 00:14:50.213 "raid_level": "raid5f", 00:14:50.213 "superblock": true, 00:14:50.213 "num_base_bdevs": 3, 00:14:50.213 "num_base_bdevs_discovered": 3, 00:14:50.213 "num_base_bdevs_operational": 3, 00:14:50.213 "process": { 00:14:50.213 "type": "rebuild", 00:14:50.213 "target": "spare", 00:14:50.213 "progress": { 00:14:50.213 "blocks": 20480, 00:14:50.213 "percent": 16 00:14:50.213 } 00:14:50.213 }, 00:14:50.213 "base_bdevs_list": [ 00:14:50.213 { 00:14:50.213 "name": "spare", 00:14:50.213 "uuid": "948ea45d-0b63-5f23-ae19-422f706bc94a", 00:14:50.213 "is_configured": true, 00:14:50.213 "data_offset": 2048, 00:14:50.213 "data_size": 63488 00:14:50.213 }, 00:14:50.213 { 00:14:50.213 "name": "BaseBdev2", 00:14:50.213 "uuid": "882e3d36-62f8-5199-900e-786ad3a2cadd", 00:14:50.213 "is_configured": true, 00:14:50.213 "data_offset": 2048, 00:14:50.213 "data_size": 63488 00:14:50.213 }, 00:14:50.213 { 00:14:50.213 "name": "BaseBdev3", 00:14:50.213 "uuid": "a7c7cdf3-2059-5dbd-9c80-37cd7ff0f826", 00:14:50.213 "is_configured": true, 00:14:50.213 "data_offset": 2048, 00:14:50.213 "data_size": 63488 00:14:50.213 } 00:14:50.213 ] 00:14:50.213 }' 00:14:50.213 04:12:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:50.213 04:12:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:50.213 04:12:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:50.213 04:12:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:50.213 04:12:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:50.213 04:12:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.213 04:12:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.213 [2024-11-21 04:12:50.068117] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:50.213 [2024-11-21 04:12:50.140212] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:50.213 [2024-11-21 04:12:50.140345] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:50.213 [2024-11-21 04:12:50.140366] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:50.213 [2024-11-21 04:12:50.140381] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:50.213 04:12:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.213 04:12:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:50.213 04:12:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:50.213 04:12:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:50.213 04:12:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:50.213 04:12:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:50.213 04:12:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:50.213 04:12:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:50.213 04:12:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:50.213 04:12:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:50.214 04:12:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:50.214 04:12:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.214 04:12:50 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:50.214 04:12:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.214 04:12:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.484 04:12:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.484 04:12:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:50.484 "name": "raid_bdev1", 00:14:50.484 "uuid": "10c29ace-1339-44ee-8a87-ea07fa9c7a85", 00:14:50.484 "strip_size_kb": 64, 00:14:50.484 "state": "online", 00:14:50.484 "raid_level": "raid5f", 00:14:50.484 "superblock": true, 00:14:50.484 "num_base_bdevs": 3, 00:14:50.484 "num_base_bdevs_discovered": 2, 00:14:50.484 "num_base_bdevs_operational": 2, 00:14:50.484 "base_bdevs_list": [ 00:14:50.484 { 00:14:50.484 "name": null, 00:14:50.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.484 "is_configured": false, 00:14:50.484 "data_offset": 0, 00:14:50.484 "data_size": 63488 00:14:50.484 }, 00:14:50.484 { 00:14:50.484 "name": "BaseBdev2", 00:14:50.484 "uuid": "882e3d36-62f8-5199-900e-786ad3a2cadd", 00:14:50.484 "is_configured": true, 00:14:50.484 "data_offset": 2048, 00:14:50.484 "data_size": 63488 00:14:50.484 }, 00:14:50.484 { 00:14:50.484 "name": "BaseBdev3", 00:14:50.484 "uuid": "a7c7cdf3-2059-5dbd-9c80-37cd7ff0f826", 00:14:50.484 "is_configured": true, 00:14:50.484 "data_offset": 2048, 00:14:50.484 "data_size": 63488 00:14:50.484 } 00:14:50.484 ] 00:14:50.484 }' 00:14:50.484 04:12:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:50.484 04:12:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.744 04:12:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:50.744 04:12:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:14:50.744 04:12:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:50.744 04:12:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:50.744 04:12:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:50.744 04:12:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.744 04:12:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:50.744 04:12:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.744 04:12:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.744 04:12:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.744 04:12:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:50.744 "name": "raid_bdev1", 00:14:50.744 "uuid": "10c29ace-1339-44ee-8a87-ea07fa9c7a85", 00:14:50.744 "strip_size_kb": 64, 00:14:50.744 "state": "online", 00:14:50.744 "raid_level": "raid5f", 00:14:50.744 "superblock": true, 00:14:50.744 "num_base_bdevs": 3, 00:14:50.744 "num_base_bdevs_discovered": 2, 00:14:50.744 "num_base_bdevs_operational": 2, 00:14:50.744 "base_bdevs_list": [ 00:14:50.744 { 00:14:50.744 "name": null, 00:14:50.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.744 "is_configured": false, 00:14:50.744 "data_offset": 0, 00:14:50.744 "data_size": 63488 00:14:50.744 }, 00:14:50.744 { 00:14:50.744 "name": "BaseBdev2", 00:14:50.744 "uuid": "882e3d36-62f8-5199-900e-786ad3a2cadd", 00:14:50.744 "is_configured": true, 00:14:50.744 "data_offset": 2048, 00:14:50.744 "data_size": 63488 00:14:50.744 }, 00:14:50.744 { 00:14:50.744 "name": "BaseBdev3", 00:14:50.744 "uuid": "a7c7cdf3-2059-5dbd-9c80-37cd7ff0f826", 00:14:50.744 "is_configured": true, 
00:14:50.744 "data_offset": 2048, 00:14:50.744 "data_size": 63488 00:14:50.744 } 00:14:50.744 ] 00:14:50.744 }' 00:14:50.744 04:12:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:50.744 04:12:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:50.744 04:12:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:51.005 04:12:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:51.005 04:12:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:51.005 04:12:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.005 04:12:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.005 04:12:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.005 04:12:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:51.005 04:12:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.005 04:12:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.005 [2024-11-21 04:12:50.740620] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:51.005 [2024-11-21 04:12:50.740742] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:51.005 [2024-11-21 04:12:50.740789] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:14:51.005 [2024-11-21 04:12:50.740846] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:51.005 [2024-11-21 04:12:50.741334] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:51.005 [2024-11-21 
04:12:50.741393] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:51.005 [2024-11-21 04:12:50.741495] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:51.005 [2024-11-21 04:12:50.741526] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:51.005 [2024-11-21 04:12:50.741535] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:51.005 [2024-11-21 04:12:50.741547] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:51.005 BaseBdev1 00:14:51.005 04:12:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.005 04:12:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:51.945 04:12:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:51.945 04:12:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:51.945 04:12:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:51.945 04:12:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:51.945 04:12:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:51.945 04:12:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:51.945 04:12:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:51.945 04:12:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:51.945 04:12:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:51.945 04:12:51 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.945 04:12:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.945 04:12:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:51.945 04:12:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.945 04:12:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.945 04:12:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.946 04:12:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.946 "name": "raid_bdev1", 00:14:51.946 "uuid": "10c29ace-1339-44ee-8a87-ea07fa9c7a85", 00:14:51.946 "strip_size_kb": 64, 00:14:51.946 "state": "online", 00:14:51.946 "raid_level": "raid5f", 00:14:51.946 "superblock": true, 00:14:51.946 "num_base_bdevs": 3, 00:14:51.946 "num_base_bdevs_discovered": 2, 00:14:51.946 "num_base_bdevs_operational": 2, 00:14:51.946 "base_bdevs_list": [ 00:14:51.946 { 00:14:51.946 "name": null, 00:14:51.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.946 "is_configured": false, 00:14:51.946 "data_offset": 0, 00:14:51.946 "data_size": 63488 00:14:51.946 }, 00:14:51.946 { 00:14:51.946 "name": "BaseBdev2", 00:14:51.946 "uuid": "882e3d36-62f8-5199-900e-786ad3a2cadd", 00:14:51.946 "is_configured": true, 00:14:51.946 "data_offset": 2048, 00:14:51.946 "data_size": 63488 00:14:51.946 }, 00:14:51.946 { 00:14:51.946 "name": "BaseBdev3", 00:14:51.946 "uuid": "a7c7cdf3-2059-5dbd-9c80-37cd7ff0f826", 00:14:51.946 "is_configured": true, 00:14:51.946 "data_offset": 2048, 00:14:51.946 "data_size": 63488 00:14:51.946 } 00:14:51.946 ] 00:14:51.946 }' 00:14:51.946 04:12:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.946 04:12:51 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:52.517 04:12:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:52.517 04:12:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:52.517 04:12:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:52.517 04:12:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:52.517 04:12:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:52.517 04:12:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.517 04:12:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.517 04:12:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.517 04:12:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.517 04:12:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.517 04:12:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:52.517 "name": "raid_bdev1", 00:14:52.517 "uuid": "10c29ace-1339-44ee-8a87-ea07fa9c7a85", 00:14:52.517 "strip_size_kb": 64, 00:14:52.517 "state": "online", 00:14:52.517 "raid_level": "raid5f", 00:14:52.517 "superblock": true, 00:14:52.517 "num_base_bdevs": 3, 00:14:52.517 "num_base_bdevs_discovered": 2, 00:14:52.517 "num_base_bdevs_operational": 2, 00:14:52.517 "base_bdevs_list": [ 00:14:52.517 { 00:14:52.517 "name": null, 00:14:52.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.517 "is_configured": false, 00:14:52.517 "data_offset": 0, 00:14:52.517 "data_size": 63488 00:14:52.517 }, 00:14:52.517 { 00:14:52.517 "name": "BaseBdev2", 00:14:52.517 "uuid": "882e3d36-62f8-5199-900e-786ad3a2cadd", 
00:14:52.517 "is_configured": true, 00:14:52.517 "data_offset": 2048, 00:14:52.517 "data_size": 63488 00:14:52.517 }, 00:14:52.517 { 00:14:52.517 "name": "BaseBdev3", 00:14:52.517 "uuid": "a7c7cdf3-2059-5dbd-9c80-37cd7ff0f826", 00:14:52.517 "is_configured": true, 00:14:52.517 "data_offset": 2048, 00:14:52.517 "data_size": 63488 00:14:52.517 } 00:14:52.517 ] 00:14:52.517 }' 00:14:52.517 04:12:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:52.517 04:12:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:52.517 04:12:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:52.517 04:12:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:52.517 04:12:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:52.517 04:12:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:14:52.517 04:12:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:52.517 04:12:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:52.517 04:12:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:52.517 04:12:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:52.517 04:12:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:52.517 04:12:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:52.517 04:12:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.517 04:12:52 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.517 [2024-11-21 04:12:52.353870] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:52.517 [2024-11-21 04:12:52.354079] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:52.517 [2024-11-21 04:12:52.354107] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:52.517 request: 00:14:52.517 { 00:14:52.517 "base_bdev": "BaseBdev1", 00:14:52.517 "raid_bdev": "raid_bdev1", 00:14:52.517 "method": "bdev_raid_add_base_bdev", 00:14:52.517 "req_id": 1 00:14:52.517 } 00:14:52.517 Got JSON-RPC error response 00:14:52.517 response: 00:14:52.517 { 00:14:52.517 "code": -22, 00:14:52.517 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:52.517 } 00:14:52.517 04:12:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:52.517 04:12:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:14:52.517 04:12:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:52.517 04:12:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:52.517 04:12:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:52.517 04:12:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:53.457 04:12:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:53.457 04:12:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:53.457 04:12:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:53.457 04:12:53 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:53.457 04:12:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:53.458 04:12:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:53.458 04:12:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:53.458 04:12:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:53.458 04:12:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:53.458 04:12:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:53.458 04:12:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.458 04:12:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.458 04:12:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.458 04:12:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.458 04:12:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.458 04:12:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:53.458 "name": "raid_bdev1", 00:14:53.458 "uuid": "10c29ace-1339-44ee-8a87-ea07fa9c7a85", 00:14:53.458 "strip_size_kb": 64, 00:14:53.458 "state": "online", 00:14:53.458 "raid_level": "raid5f", 00:14:53.458 "superblock": true, 00:14:53.458 "num_base_bdevs": 3, 00:14:53.458 "num_base_bdevs_discovered": 2, 00:14:53.458 "num_base_bdevs_operational": 2, 00:14:53.458 "base_bdevs_list": [ 00:14:53.458 { 00:14:53.458 "name": null, 00:14:53.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.458 "is_configured": false, 00:14:53.458 "data_offset": 0, 00:14:53.458 "data_size": 63488 00:14:53.458 }, 00:14:53.458 { 00:14:53.458 
"name": "BaseBdev2", 00:14:53.458 "uuid": "882e3d36-62f8-5199-900e-786ad3a2cadd", 00:14:53.458 "is_configured": true, 00:14:53.458 "data_offset": 2048, 00:14:53.458 "data_size": 63488 00:14:53.458 }, 00:14:53.458 { 00:14:53.458 "name": "BaseBdev3", 00:14:53.458 "uuid": "a7c7cdf3-2059-5dbd-9c80-37cd7ff0f826", 00:14:53.458 "is_configured": true, 00:14:53.458 "data_offset": 2048, 00:14:53.458 "data_size": 63488 00:14:53.458 } 00:14:53.458 ] 00:14:53.458 }' 00:14:53.458 04:12:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:53.458 04:12:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.028 04:12:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:54.028 04:12:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:54.028 04:12:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:54.028 04:12:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:54.028 04:12:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:54.028 04:12:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.028 04:12:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:54.028 04:12:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.028 04:12:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.028 04:12:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.028 04:12:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:54.028 "name": "raid_bdev1", 00:14:54.028 "uuid": "10c29ace-1339-44ee-8a87-ea07fa9c7a85", 00:14:54.028 
"strip_size_kb": 64, 00:14:54.028 "state": "online", 00:14:54.028 "raid_level": "raid5f", 00:14:54.028 "superblock": true, 00:14:54.028 "num_base_bdevs": 3, 00:14:54.028 "num_base_bdevs_discovered": 2, 00:14:54.028 "num_base_bdevs_operational": 2, 00:14:54.028 "base_bdevs_list": [ 00:14:54.028 { 00:14:54.028 "name": null, 00:14:54.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.028 "is_configured": false, 00:14:54.028 "data_offset": 0, 00:14:54.028 "data_size": 63488 00:14:54.028 }, 00:14:54.028 { 00:14:54.028 "name": "BaseBdev2", 00:14:54.028 "uuid": "882e3d36-62f8-5199-900e-786ad3a2cadd", 00:14:54.028 "is_configured": true, 00:14:54.028 "data_offset": 2048, 00:14:54.028 "data_size": 63488 00:14:54.028 }, 00:14:54.028 { 00:14:54.028 "name": "BaseBdev3", 00:14:54.028 "uuid": "a7c7cdf3-2059-5dbd-9c80-37cd7ff0f826", 00:14:54.028 "is_configured": true, 00:14:54.028 "data_offset": 2048, 00:14:54.028 "data_size": 63488 00:14:54.028 } 00:14:54.028 ] 00:14:54.028 }' 00:14:54.028 04:12:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:54.028 04:12:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:54.028 04:12:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:54.028 04:12:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:54.028 04:12:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 92564 00:14:54.028 04:12:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 92564 ']' 00:14:54.028 04:12:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 92564 00:14:54.028 04:12:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:14:54.028 04:12:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:54.028 04:12:53 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92564 00:14:54.289 04:12:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:54.289 04:12:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:54.289 killing process with pid 92564 00:14:54.289 04:12:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92564' 00:14:54.289 04:12:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 92564 00:14:54.289 Received shutdown signal, test time was about 60.000000 seconds 00:14:54.289 00:14:54.289 Latency(us) 00:14:54.289 [2024-11-21T04:12:54.262Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:54.289 [2024-11-21T04:12:54.262Z] =================================================================================================================== 00:14:54.289 [2024-11-21T04:12:54.262Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:54.289 04:12:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 92564 00:14:54.289 [2024-11-21 04:12:54.005555] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:54.289 [2024-11-21 04:12:54.005709] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:54.289 [2024-11-21 04:12:54.005815] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:54.289 [2024-11-21 04:12:54.005861] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:14:54.289 [2024-11-21 04:12:54.079225] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:54.551 04:12:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:14:54.551 00:14:54.551 real 0m21.591s 00:14:54.551 user 0m27.760s 
00:14:54.551 sys 0m2.916s 00:14:54.551 04:12:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:54.551 04:12:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.551 ************************************ 00:14:54.551 END TEST raid5f_rebuild_test_sb 00:14:54.551 ************************************ 00:14:54.551 04:12:54 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:14:54.551 04:12:54 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:14:54.551 04:12:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:54.551 04:12:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:54.551 04:12:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:54.551 ************************************ 00:14:54.551 START TEST raid5f_state_function_test 00:14:54.551 ************************************ 00:14:54.551 04:12:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 00:14:54.551 04:12:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:54.551 04:12:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:14:54.551 04:12:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:14:54.551 04:12:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:54.551 04:12:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:54.551 04:12:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:54.551 04:12:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:54.551 04:12:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:14:54.551 04:12:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:54.551 04:12:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:54.551 04:12:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:54.551 04:12:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:54.551 04:12:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:54.551 04:12:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:54.551 04:12:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:54.551 04:12:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:14:54.551 04:12:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:54.551 04:12:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:54.551 04:12:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:54.551 04:12:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:54.551 04:12:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:54.551 04:12:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:54.551 04:12:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:54.551 04:12:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:54.551 04:12:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:54.551 04:12:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:14:54.551 04:12:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:54.551 04:12:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:14:54.551 04:12:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:14:54.551 04:12:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=93299 00:14:54.551 04:12:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:54.551 04:12:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 93299' 00:14:54.551 Process raid pid: 93299 00:14:54.551 04:12:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 93299 00:14:54.551 04:12:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 93299 ']' 00:14:54.551 04:12:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:54.551 04:12:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:54.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:54.551 04:12:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:54.551 04:12:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:54.551 04:12:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.812 [2024-11-21 04:12:54.572627] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:14:54.812 [2024-11-21 04:12:54.572741] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:54.812 [2024-11-21 04:12:54.705127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:54.812 [2024-11-21 04:12:54.744333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:55.072 [2024-11-21 04:12:54.821504] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:55.072 [2024-11-21 04:12:54.821541] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:55.642 04:12:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:55.642 04:12:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:14:55.642 04:12:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:55.642 04:12:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.642 04:12:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.642 [2024-11-21 04:12:55.401416] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:55.642 [2024-11-21 04:12:55.401552] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:55.642 [2024-11-21 04:12:55.401586] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:55.642 [2024-11-21 04:12:55.401609] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:55.642 [2024-11-21 04:12:55.401626] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:14:55.642 [2024-11-21 04:12:55.401687] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:55.642 [2024-11-21 04:12:55.401708] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:55.642 [2024-11-21 04:12:55.401768] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:55.642 04:12:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.642 04:12:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:55.642 04:12:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:55.643 04:12:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:55.643 04:12:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:55.643 04:12:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:55.643 04:12:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:55.643 04:12:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:55.643 04:12:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:55.643 04:12:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:55.643 04:12:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:55.643 04:12:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.643 04:12:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.643 04:12:55 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:55.643 04:12:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:55.643 04:12:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.643 04:12:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:55.643 "name": "Existed_Raid", 00:14:55.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.643 "strip_size_kb": 64, 00:14:55.643 "state": "configuring", 00:14:55.643 "raid_level": "raid5f", 00:14:55.643 "superblock": false, 00:14:55.643 "num_base_bdevs": 4, 00:14:55.643 "num_base_bdevs_discovered": 0, 00:14:55.643 "num_base_bdevs_operational": 4, 00:14:55.643 "base_bdevs_list": [ 00:14:55.643 { 00:14:55.643 "name": "BaseBdev1", 00:14:55.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.643 "is_configured": false, 00:14:55.643 "data_offset": 0, 00:14:55.643 "data_size": 0 00:14:55.643 }, 00:14:55.643 { 00:14:55.643 "name": "BaseBdev2", 00:14:55.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.643 "is_configured": false, 00:14:55.643 "data_offset": 0, 00:14:55.643 "data_size": 0 00:14:55.643 }, 00:14:55.643 { 00:14:55.643 "name": "BaseBdev3", 00:14:55.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.643 "is_configured": false, 00:14:55.643 "data_offset": 0, 00:14:55.643 "data_size": 0 00:14:55.643 }, 00:14:55.643 { 00:14:55.643 "name": "BaseBdev4", 00:14:55.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.643 "is_configured": false, 00:14:55.643 "data_offset": 0, 00:14:55.643 "data_size": 0 00:14:55.643 } 00:14:55.643 ] 00:14:55.643 }' 00:14:55.643 04:12:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.643 04:12:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.903 04:12:55 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:55.903 04:12:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.903 04:12:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.903 [2024-11-21 04:12:55.848503] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:55.903 [2024-11-21 04:12:55.848588] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:14:55.904 04:12:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.904 04:12:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:55.904 04:12:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.904 04:12:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.904 [2024-11-21 04:12:55.860524] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:55.904 [2024-11-21 04:12:55.860617] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:55.904 [2024-11-21 04:12:55.860643] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:55.904 [2024-11-21 04:12:55.860665] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:55.904 [2024-11-21 04:12:55.860673] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:55.904 [2024-11-21 04:12:55.860683] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:55.904 [2024-11-21 04:12:55.860688] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:14:55.904 [2024-11-21 04:12:55.860697] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:55.904 04:12:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.904 04:12:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:55.904 04:12:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.904 04:12:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.164 [2024-11-21 04:12:55.887683] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:56.164 BaseBdev1 00:14:56.164 04:12:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.164 04:12:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:56.164 04:12:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:56.164 04:12:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:56.164 04:12:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:56.164 04:12:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:56.164 04:12:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:56.164 04:12:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:56.164 04:12:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.164 04:12:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.164 04:12:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.164 
04:12:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:56.164 04:12:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.164 04:12:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.164 [ 00:14:56.164 { 00:14:56.164 "name": "BaseBdev1", 00:14:56.164 "aliases": [ 00:14:56.164 "2aec09b3-4733-4f84-af29-cd9c8e887ad3" 00:14:56.164 ], 00:14:56.164 "product_name": "Malloc disk", 00:14:56.164 "block_size": 512, 00:14:56.164 "num_blocks": 65536, 00:14:56.164 "uuid": "2aec09b3-4733-4f84-af29-cd9c8e887ad3", 00:14:56.164 "assigned_rate_limits": { 00:14:56.164 "rw_ios_per_sec": 0, 00:14:56.164 "rw_mbytes_per_sec": 0, 00:14:56.164 "r_mbytes_per_sec": 0, 00:14:56.165 "w_mbytes_per_sec": 0 00:14:56.165 }, 00:14:56.165 "claimed": true, 00:14:56.165 "claim_type": "exclusive_write", 00:14:56.165 "zoned": false, 00:14:56.165 "supported_io_types": { 00:14:56.165 "read": true, 00:14:56.165 "write": true, 00:14:56.165 "unmap": true, 00:14:56.165 "flush": true, 00:14:56.165 "reset": true, 00:14:56.165 "nvme_admin": false, 00:14:56.165 "nvme_io": false, 00:14:56.165 "nvme_io_md": false, 00:14:56.165 "write_zeroes": true, 00:14:56.165 "zcopy": true, 00:14:56.165 "get_zone_info": false, 00:14:56.165 "zone_management": false, 00:14:56.165 "zone_append": false, 00:14:56.165 "compare": false, 00:14:56.165 "compare_and_write": false, 00:14:56.165 "abort": true, 00:14:56.165 "seek_hole": false, 00:14:56.165 "seek_data": false, 00:14:56.165 "copy": true, 00:14:56.165 "nvme_iov_md": false 00:14:56.165 }, 00:14:56.165 "memory_domains": [ 00:14:56.165 { 00:14:56.165 "dma_device_id": "system", 00:14:56.165 "dma_device_type": 1 00:14:56.165 }, 00:14:56.165 { 00:14:56.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:56.165 "dma_device_type": 2 00:14:56.165 } 00:14:56.165 ], 00:14:56.165 "driver_specific": {} 00:14:56.165 } 
00:14:56.165 ] 00:14:56.165 04:12:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.165 04:12:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:56.165 04:12:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:56.165 04:12:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:56.165 04:12:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:56.165 04:12:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:56.165 04:12:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:56.165 04:12:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:56.165 04:12:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.165 04:12:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.165 04:12:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.165 04:12:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.165 04:12:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:56.165 04:12:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.165 04:12:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.165 04:12:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.165 04:12:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:56.165 04:12:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.165 "name": "Existed_Raid", 00:14:56.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.165 "strip_size_kb": 64, 00:14:56.165 "state": "configuring", 00:14:56.165 "raid_level": "raid5f", 00:14:56.165 "superblock": false, 00:14:56.165 "num_base_bdevs": 4, 00:14:56.165 "num_base_bdevs_discovered": 1, 00:14:56.165 "num_base_bdevs_operational": 4, 00:14:56.165 "base_bdevs_list": [ 00:14:56.165 { 00:14:56.165 "name": "BaseBdev1", 00:14:56.165 "uuid": "2aec09b3-4733-4f84-af29-cd9c8e887ad3", 00:14:56.165 "is_configured": true, 00:14:56.165 "data_offset": 0, 00:14:56.165 "data_size": 65536 00:14:56.165 }, 00:14:56.165 { 00:14:56.165 "name": "BaseBdev2", 00:14:56.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.165 "is_configured": false, 00:14:56.165 "data_offset": 0, 00:14:56.165 "data_size": 0 00:14:56.165 }, 00:14:56.165 { 00:14:56.165 "name": "BaseBdev3", 00:14:56.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.165 "is_configured": false, 00:14:56.165 "data_offset": 0, 00:14:56.165 "data_size": 0 00:14:56.165 }, 00:14:56.165 { 00:14:56.165 "name": "BaseBdev4", 00:14:56.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.165 "is_configured": false, 00:14:56.165 "data_offset": 0, 00:14:56.165 "data_size": 0 00:14:56.165 } 00:14:56.165 ] 00:14:56.165 }' 00:14:56.165 04:12:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.165 04:12:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.426 04:12:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:56.426 04:12:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.426 04:12:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.426 
[2024-11-21 04:12:56.370859] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:56.426 [2024-11-21 04:12:56.370966] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:14:56.426 04:12:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.426 04:12:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:56.426 04:12:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.426 04:12:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.426 [2024-11-21 04:12:56.382885] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:56.426 [2024-11-21 04:12:56.385077] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:56.426 [2024-11-21 04:12:56.385160] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:56.426 [2024-11-21 04:12:56.385175] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:56.426 [2024-11-21 04:12:56.385199] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:56.426 [2024-11-21 04:12:56.385205] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:56.426 [2024-11-21 04:12:56.385213] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:56.426 04:12:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.426 04:12:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:56.426 04:12:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:14:56.426 04:12:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:56.426 04:12:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:56.426 04:12:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:56.426 04:12:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:56.426 04:12:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:56.426 04:12:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:56.426 04:12:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.426 04:12:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.426 04:12:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.426 04:12:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.426 04:12:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.426 04:12:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.426 04:12:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.426 04:12:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:56.687 04:12:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.687 04:12:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.687 "name": "Existed_Raid", 00:14:56.687 "uuid": "00000000-0000-0000-0000-000000000000", 
00:14:56.687 "strip_size_kb": 64, 00:14:56.687 "state": "configuring", 00:14:56.687 "raid_level": "raid5f", 00:14:56.687 "superblock": false, 00:14:56.687 "num_base_bdevs": 4, 00:14:56.687 "num_base_bdevs_discovered": 1, 00:14:56.687 "num_base_bdevs_operational": 4, 00:14:56.687 "base_bdevs_list": [ 00:14:56.687 { 00:14:56.687 "name": "BaseBdev1", 00:14:56.687 "uuid": "2aec09b3-4733-4f84-af29-cd9c8e887ad3", 00:14:56.687 "is_configured": true, 00:14:56.687 "data_offset": 0, 00:14:56.687 "data_size": 65536 00:14:56.687 }, 00:14:56.687 { 00:14:56.687 "name": "BaseBdev2", 00:14:56.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.687 "is_configured": false, 00:14:56.687 "data_offset": 0, 00:14:56.687 "data_size": 0 00:14:56.687 }, 00:14:56.687 { 00:14:56.687 "name": "BaseBdev3", 00:14:56.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.687 "is_configured": false, 00:14:56.687 "data_offset": 0, 00:14:56.687 "data_size": 0 00:14:56.687 }, 00:14:56.687 { 00:14:56.687 "name": "BaseBdev4", 00:14:56.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.687 "is_configured": false, 00:14:56.687 "data_offset": 0, 00:14:56.687 "data_size": 0 00:14:56.687 } 00:14:56.687 ] 00:14:56.687 }' 00:14:56.687 04:12:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.687 04:12:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.947 04:12:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:56.947 04:12:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.947 04:12:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.947 [2024-11-21 04:12:56.826633] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:56.947 BaseBdev2 00:14:56.947 04:12:56 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.947 04:12:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:56.947 04:12:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:56.947 04:12:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:56.947 04:12:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:56.947 04:12:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:56.947 04:12:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:56.947 04:12:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:56.947 04:12:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.947 04:12:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.947 04:12:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.947 04:12:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:56.947 04:12:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.947 04:12:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.947 [ 00:14:56.947 { 00:14:56.947 "name": "BaseBdev2", 00:14:56.947 "aliases": [ 00:14:56.947 "a959d44e-7bb1-4c9f-8551-0f8c835bc2c7" 00:14:56.947 ], 00:14:56.947 "product_name": "Malloc disk", 00:14:56.947 "block_size": 512, 00:14:56.947 "num_blocks": 65536, 00:14:56.947 "uuid": "a959d44e-7bb1-4c9f-8551-0f8c835bc2c7", 00:14:56.947 "assigned_rate_limits": { 00:14:56.947 "rw_ios_per_sec": 0, 00:14:56.947 "rw_mbytes_per_sec": 0, 00:14:56.947 
"r_mbytes_per_sec": 0, 00:14:56.947 "w_mbytes_per_sec": 0 00:14:56.947 }, 00:14:56.947 "claimed": true, 00:14:56.947 "claim_type": "exclusive_write", 00:14:56.947 "zoned": false, 00:14:56.947 "supported_io_types": { 00:14:56.947 "read": true, 00:14:56.947 "write": true, 00:14:56.947 "unmap": true, 00:14:56.947 "flush": true, 00:14:56.947 "reset": true, 00:14:56.947 "nvme_admin": false, 00:14:56.947 "nvme_io": false, 00:14:56.947 "nvme_io_md": false, 00:14:56.947 "write_zeroes": true, 00:14:56.947 "zcopy": true, 00:14:56.947 "get_zone_info": false, 00:14:56.947 "zone_management": false, 00:14:56.947 "zone_append": false, 00:14:56.947 "compare": false, 00:14:56.947 "compare_and_write": false, 00:14:56.947 "abort": true, 00:14:56.947 "seek_hole": false, 00:14:56.947 "seek_data": false, 00:14:56.947 "copy": true, 00:14:56.947 "nvme_iov_md": false 00:14:56.947 }, 00:14:56.947 "memory_domains": [ 00:14:56.947 { 00:14:56.947 "dma_device_id": "system", 00:14:56.947 "dma_device_type": 1 00:14:56.947 }, 00:14:56.947 { 00:14:56.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:56.947 "dma_device_type": 2 00:14:56.947 } 00:14:56.947 ], 00:14:56.947 "driver_specific": {} 00:14:56.947 } 00:14:56.947 ] 00:14:56.947 04:12:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.947 04:12:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:56.947 04:12:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:56.947 04:12:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:56.947 04:12:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:56.947 04:12:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:56.947 04:12:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:14:56.947 04:12:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:56.947 04:12:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:56.947 04:12:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:56.947 04:12:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.947 04:12:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.947 04:12:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.947 04:12:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.947 04:12:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.947 04:12:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:56.947 04:12:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.947 04:12:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.947 04:12:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.948 04:12:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.948 "name": "Existed_Raid", 00:14:56.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.948 "strip_size_kb": 64, 00:14:56.948 "state": "configuring", 00:14:56.948 "raid_level": "raid5f", 00:14:56.948 "superblock": false, 00:14:56.948 "num_base_bdevs": 4, 00:14:56.948 "num_base_bdevs_discovered": 2, 00:14:56.948 "num_base_bdevs_operational": 4, 00:14:56.948 "base_bdevs_list": [ 00:14:56.948 { 00:14:56.948 "name": "BaseBdev1", 00:14:56.948 "uuid": 
"2aec09b3-4733-4f84-af29-cd9c8e887ad3", 00:14:56.948 "is_configured": true, 00:14:56.948 "data_offset": 0, 00:14:56.948 "data_size": 65536 00:14:56.948 }, 00:14:56.948 { 00:14:56.948 "name": "BaseBdev2", 00:14:56.948 "uuid": "a959d44e-7bb1-4c9f-8551-0f8c835bc2c7", 00:14:56.948 "is_configured": true, 00:14:56.948 "data_offset": 0, 00:14:56.948 "data_size": 65536 00:14:56.948 }, 00:14:56.948 { 00:14:56.948 "name": "BaseBdev3", 00:14:56.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.948 "is_configured": false, 00:14:56.948 "data_offset": 0, 00:14:56.948 "data_size": 0 00:14:56.948 }, 00:14:56.948 { 00:14:56.948 "name": "BaseBdev4", 00:14:56.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.948 "is_configured": false, 00:14:56.948 "data_offset": 0, 00:14:56.948 "data_size": 0 00:14:56.948 } 00:14:56.948 ] 00:14:56.948 }' 00:14:56.948 04:12:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.948 04:12:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.519 04:12:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:57.519 04:12:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.519 04:12:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.519 [2024-11-21 04:12:57.271358] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:57.519 BaseBdev3 00:14:57.519 04:12:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.519 04:12:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:57.519 04:12:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:57.519 04:12:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- 
# local bdev_timeout= 00:14:57.519 04:12:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:57.519 04:12:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:57.519 04:12:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:57.519 04:12:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:57.519 04:12:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.519 04:12:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.519 04:12:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.519 04:12:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:57.519 04:12:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.519 04:12:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.519 [ 00:14:57.519 { 00:14:57.519 "name": "BaseBdev3", 00:14:57.519 "aliases": [ 00:14:57.519 "bbfa2893-7fce-47dc-8b2f-3245d69c82cd" 00:14:57.519 ], 00:14:57.519 "product_name": "Malloc disk", 00:14:57.519 "block_size": 512, 00:14:57.519 "num_blocks": 65536, 00:14:57.519 "uuid": "bbfa2893-7fce-47dc-8b2f-3245d69c82cd", 00:14:57.519 "assigned_rate_limits": { 00:14:57.519 "rw_ios_per_sec": 0, 00:14:57.519 "rw_mbytes_per_sec": 0, 00:14:57.519 "r_mbytes_per_sec": 0, 00:14:57.519 "w_mbytes_per_sec": 0 00:14:57.519 }, 00:14:57.519 "claimed": true, 00:14:57.519 "claim_type": "exclusive_write", 00:14:57.519 "zoned": false, 00:14:57.519 "supported_io_types": { 00:14:57.519 "read": true, 00:14:57.519 "write": true, 00:14:57.519 "unmap": true, 00:14:57.519 "flush": true, 00:14:57.519 "reset": true, 00:14:57.519 "nvme_admin": false, 
00:14:57.519 "nvme_io": false, 00:14:57.519 "nvme_io_md": false, 00:14:57.519 "write_zeroes": true, 00:14:57.519 "zcopy": true, 00:14:57.519 "get_zone_info": false, 00:14:57.519 "zone_management": false, 00:14:57.519 "zone_append": false, 00:14:57.519 "compare": false, 00:14:57.519 "compare_and_write": false, 00:14:57.519 "abort": true, 00:14:57.519 "seek_hole": false, 00:14:57.519 "seek_data": false, 00:14:57.519 "copy": true, 00:14:57.519 "nvme_iov_md": false 00:14:57.519 }, 00:14:57.519 "memory_domains": [ 00:14:57.519 { 00:14:57.519 "dma_device_id": "system", 00:14:57.519 "dma_device_type": 1 00:14:57.519 }, 00:14:57.519 { 00:14:57.519 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:57.519 "dma_device_type": 2 00:14:57.519 } 00:14:57.519 ], 00:14:57.519 "driver_specific": {} 00:14:57.519 } 00:14:57.519 ] 00:14:57.519 04:12:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.519 04:12:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:57.519 04:12:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:57.519 04:12:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:57.519 04:12:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:57.519 04:12:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:57.519 04:12:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:57.519 04:12:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:57.519 04:12:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:57.519 04:12:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:14:57.519 04:12:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:57.519 04:12:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:57.519 04:12:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:57.519 04:12:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:57.519 04:12:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.519 04:12:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:57.519 04:12:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.519 04:12:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.520 04:12:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.520 04:12:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:57.520 "name": "Existed_Raid", 00:14:57.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.520 "strip_size_kb": 64, 00:14:57.520 "state": "configuring", 00:14:57.520 "raid_level": "raid5f", 00:14:57.520 "superblock": false, 00:14:57.520 "num_base_bdevs": 4, 00:14:57.520 "num_base_bdevs_discovered": 3, 00:14:57.520 "num_base_bdevs_operational": 4, 00:14:57.520 "base_bdevs_list": [ 00:14:57.520 { 00:14:57.520 "name": "BaseBdev1", 00:14:57.520 "uuid": "2aec09b3-4733-4f84-af29-cd9c8e887ad3", 00:14:57.520 "is_configured": true, 00:14:57.520 "data_offset": 0, 00:14:57.520 "data_size": 65536 00:14:57.520 }, 00:14:57.520 { 00:14:57.520 "name": "BaseBdev2", 00:14:57.520 "uuid": "a959d44e-7bb1-4c9f-8551-0f8c835bc2c7", 00:14:57.520 "is_configured": true, 00:14:57.520 "data_offset": 0, 00:14:57.520 "data_size": 65536 00:14:57.520 }, 00:14:57.520 { 
00:14:57.520 "name": "BaseBdev3", 00:14:57.520 "uuid": "bbfa2893-7fce-47dc-8b2f-3245d69c82cd", 00:14:57.520 "is_configured": true, 00:14:57.520 "data_offset": 0, 00:14:57.520 "data_size": 65536 00:14:57.520 }, 00:14:57.520 { 00:14:57.520 "name": "BaseBdev4", 00:14:57.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.520 "is_configured": false, 00:14:57.520 "data_offset": 0, 00:14:57.520 "data_size": 0 00:14:57.520 } 00:14:57.520 ] 00:14:57.520 }' 00:14:57.520 04:12:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:57.520 04:12:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.780 04:12:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:57.780 04:12:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.780 04:12:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.040 [2024-11-21 04:12:57.763245] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:58.040 [2024-11-21 04:12:57.763381] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:14:58.040 [2024-11-21 04:12:57.763408] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:14:58.040 [2024-11-21 04:12:57.763808] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:14:58.040 [2024-11-21 04:12:57.764440] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:14:58.040 [2024-11-21 04:12:57.764504] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:14:58.040 [2024-11-21 04:12:57.764804] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:58.040 BaseBdev4 00:14:58.040 04:12:57 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.040 04:12:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:14:58.040 04:12:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:58.040 04:12:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:58.040 04:12:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:58.040 04:12:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:58.040 04:12:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:58.040 04:12:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:58.040 04:12:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.040 04:12:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.040 04:12:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.040 04:12:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:58.040 04:12:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.040 04:12:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.040 [ 00:14:58.040 { 00:14:58.040 "name": "BaseBdev4", 00:14:58.040 "aliases": [ 00:14:58.040 "45d2e4bd-b116-43c0-bcab-1030c0e6fbe1" 00:14:58.040 ], 00:14:58.040 "product_name": "Malloc disk", 00:14:58.040 "block_size": 512, 00:14:58.041 "num_blocks": 65536, 00:14:58.041 "uuid": "45d2e4bd-b116-43c0-bcab-1030c0e6fbe1", 00:14:58.041 "assigned_rate_limits": { 00:14:58.041 "rw_ios_per_sec": 0, 00:14:58.041 
"rw_mbytes_per_sec": 0, 00:14:58.041 "r_mbytes_per_sec": 0, 00:14:58.041 "w_mbytes_per_sec": 0 00:14:58.041 }, 00:14:58.041 "claimed": true, 00:14:58.041 "claim_type": "exclusive_write", 00:14:58.041 "zoned": false, 00:14:58.041 "supported_io_types": { 00:14:58.041 "read": true, 00:14:58.041 "write": true, 00:14:58.041 "unmap": true, 00:14:58.041 "flush": true, 00:14:58.041 "reset": true, 00:14:58.041 "nvme_admin": false, 00:14:58.041 "nvme_io": false, 00:14:58.041 "nvme_io_md": false, 00:14:58.041 "write_zeroes": true, 00:14:58.041 "zcopy": true, 00:14:58.041 "get_zone_info": false, 00:14:58.041 "zone_management": false, 00:14:58.041 "zone_append": false, 00:14:58.041 "compare": false, 00:14:58.041 "compare_and_write": false, 00:14:58.041 "abort": true, 00:14:58.041 "seek_hole": false, 00:14:58.041 "seek_data": false, 00:14:58.041 "copy": true, 00:14:58.041 "nvme_iov_md": false 00:14:58.041 }, 00:14:58.041 "memory_domains": [ 00:14:58.041 { 00:14:58.041 "dma_device_id": "system", 00:14:58.041 "dma_device_type": 1 00:14:58.041 }, 00:14:58.041 { 00:14:58.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:58.041 "dma_device_type": 2 00:14:58.041 } 00:14:58.041 ], 00:14:58.041 "driver_specific": {} 00:14:58.041 } 00:14:58.041 ] 00:14:58.041 04:12:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.041 04:12:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:58.041 04:12:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:58.041 04:12:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:58.041 04:12:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:14:58.041 04:12:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:58.041 04:12:57 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:58.041 04:12:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:58.041 04:12:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:58.041 04:12:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:58.041 04:12:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.041 04:12:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.041 04:12:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.041 04:12:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.041 04:12:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.041 04:12:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:58.041 04:12:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.041 04:12:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.041 04:12:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.041 04:12:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.041 "name": "Existed_Raid", 00:14:58.041 "uuid": "7dc7400a-af20-4a30-8e11-4632c650a16c", 00:14:58.041 "strip_size_kb": 64, 00:14:58.041 "state": "online", 00:14:58.041 "raid_level": "raid5f", 00:14:58.041 "superblock": false, 00:14:58.041 "num_base_bdevs": 4, 00:14:58.041 "num_base_bdevs_discovered": 4, 00:14:58.041 "num_base_bdevs_operational": 4, 00:14:58.041 "base_bdevs_list": [ 00:14:58.041 { 00:14:58.041 "name": 
"BaseBdev1", 00:14:58.041 "uuid": "2aec09b3-4733-4f84-af29-cd9c8e887ad3", 00:14:58.041 "is_configured": true, 00:14:58.041 "data_offset": 0, 00:14:58.041 "data_size": 65536 00:14:58.041 }, 00:14:58.041 { 00:14:58.041 "name": "BaseBdev2", 00:14:58.041 "uuid": "a959d44e-7bb1-4c9f-8551-0f8c835bc2c7", 00:14:58.041 "is_configured": true, 00:14:58.041 "data_offset": 0, 00:14:58.041 "data_size": 65536 00:14:58.041 }, 00:14:58.041 { 00:14:58.041 "name": "BaseBdev3", 00:14:58.041 "uuid": "bbfa2893-7fce-47dc-8b2f-3245d69c82cd", 00:14:58.041 "is_configured": true, 00:14:58.041 "data_offset": 0, 00:14:58.041 "data_size": 65536 00:14:58.041 }, 00:14:58.041 { 00:14:58.041 "name": "BaseBdev4", 00:14:58.041 "uuid": "45d2e4bd-b116-43c0-bcab-1030c0e6fbe1", 00:14:58.041 "is_configured": true, 00:14:58.041 "data_offset": 0, 00:14:58.041 "data_size": 65536 00:14:58.041 } 00:14:58.041 ] 00:14:58.041 }' 00:14:58.041 04:12:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.041 04:12:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.613 04:12:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:58.613 04:12:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:58.613 04:12:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:58.613 04:12:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:58.613 04:12:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:58.613 04:12:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:58.613 04:12:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:58.613 04:12:58 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:58.613 04:12:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.613 04:12:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.613 [2024-11-21 04:12:58.307007] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:58.613 04:12:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.613 04:12:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:58.613 "name": "Existed_Raid", 00:14:58.613 "aliases": [ 00:14:58.613 "7dc7400a-af20-4a30-8e11-4632c650a16c" 00:14:58.613 ], 00:14:58.613 "product_name": "Raid Volume", 00:14:58.613 "block_size": 512, 00:14:58.613 "num_blocks": 196608, 00:14:58.613 "uuid": "7dc7400a-af20-4a30-8e11-4632c650a16c", 00:14:58.613 "assigned_rate_limits": { 00:14:58.613 "rw_ios_per_sec": 0, 00:14:58.613 "rw_mbytes_per_sec": 0, 00:14:58.613 "r_mbytes_per_sec": 0, 00:14:58.613 "w_mbytes_per_sec": 0 00:14:58.613 }, 00:14:58.613 "claimed": false, 00:14:58.613 "zoned": false, 00:14:58.613 "supported_io_types": { 00:14:58.613 "read": true, 00:14:58.613 "write": true, 00:14:58.613 "unmap": false, 00:14:58.613 "flush": false, 00:14:58.613 "reset": true, 00:14:58.613 "nvme_admin": false, 00:14:58.613 "nvme_io": false, 00:14:58.613 "nvme_io_md": false, 00:14:58.613 "write_zeroes": true, 00:14:58.613 "zcopy": false, 00:14:58.613 "get_zone_info": false, 00:14:58.613 "zone_management": false, 00:14:58.613 "zone_append": false, 00:14:58.613 "compare": false, 00:14:58.613 "compare_and_write": false, 00:14:58.613 "abort": false, 00:14:58.613 "seek_hole": false, 00:14:58.613 "seek_data": false, 00:14:58.613 "copy": false, 00:14:58.613 "nvme_iov_md": false 00:14:58.613 }, 00:14:58.613 "driver_specific": { 00:14:58.613 "raid": { 00:14:58.613 "uuid": "7dc7400a-af20-4a30-8e11-4632c650a16c", 00:14:58.613 "strip_size_kb": 64, 
00:14:58.613 "state": "online", 00:14:58.613 "raid_level": "raid5f", 00:14:58.613 "superblock": false, 00:14:58.613 "num_base_bdevs": 4, 00:14:58.613 "num_base_bdevs_discovered": 4, 00:14:58.613 "num_base_bdevs_operational": 4, 00:14:58.613 "base_bdevs_list": [ 00:14:58.613 { 00:14:58.613 "name": "BaseBdev1", 00:14:58.613 "uuid": "2aec09b3-4733-4f84-af29-cd9c8e887ad3", 00:14:58.613 "is_configured": true, 00:14:58.613 "data_offset": 0, 00:14:58.613 "data_size": 65536 00:14:58.613 }, 00:14:58.613 { 00:14:58.613 "name": "BaseBdev2", 00:14:58.613 "uuid": "a959d44e-7bb1-4c9f-8551-0f8c835bc2c7", 00:14:58.613 "is_configured": true, 00:14:58.613 "data_offset": 0, 00:14:58.613 "data_size": 65536 00:14:58.613 }, 00:14:58.613 { 00:14:58.613 "name": "BaseBdev3", 00:14:58.613 "uuid": "bbfa2893-7fce-47dc-8b2f-3245d69c82cd", 00:14:58.613 "is_configured": true, 00:14:58.613 "data_offset": 0, 00:14:58.613 "data_size": 65536 00:14:58.613 }, 00:14:58.613 { 00:14:58.613 "name": "BaseBdev4", 00:14:58.613 "uuid": "45d2e4bd-b116-43c0-bcab-1030c0e6fbe1", 00:14:58.613 "is_configured": true, 00:14:58.613 "data_offset": 0, 00:14:58.613 "data_size": 65536 00:14:58.613 } 00:14:58.613 ] 00:14:58.613 } 00:14:58.613 } 00:14:58.613 }' 00:14:58.613 04:12:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:58.613 04:12:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:58.613 BaseBdev2 00:14:58.613 BaseBdev3 00:14:58.613 BaseBdev4' 00:14:58.613 04:12:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:58.613 04:12:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:58.613 04:12:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:58.613 04:12:58 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:58.613 04:12:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.613 04:12:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.613 04:12:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:58.613 04:12:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.613 04:12:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:58.613 04:12:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:58.613 04:12:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:58.613 04:12:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:58.613 04:12:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:58.613 04:12:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.613 04:12:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.613 04:12:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.613 04:12:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:58.613 04:12:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:58.613 04:12:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:58.614 04:12:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:58.614 04:12:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:58.614 04:12:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.614 04:12:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.614 04:12:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.614 04:12:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:58.614 04:12:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:58.614 04:12:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:58.874 04:12:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:58.874 04:12:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.874 04:12:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.874 04:12:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:58.874 04:12:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.874 04:12:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:58.874 04:12:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:58.874 04:12:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:58.874 04:12:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.874 04:12:58 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:58.874 [2024-11-21 04:12:58.622336] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:58.874 04:12:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.874 04:12:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:58.874 04:12:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:14:58.874 04:12:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:58.874 04:12:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:58.874 04:12:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:58.874 04:12:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:58.874 04:12:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:58.874 04:12:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:58.874 04:12:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:58.874 04:12:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:58.874 04:12:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:58.874 04:12:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.874 04:12:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.874 04:12:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.874 04:12:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.874 04:12:58 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.874 04:12:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.874 04:12:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.874 04:12:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:58.874 04:12:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.874 04:12:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.875 "name": "Existed_Raid", 00:14:58.875 "uuid": "7dc7400a-af20-4a30-8e11-4632c650a16c", 00:14:58.875 "strip_size_kb": 64, 00:14:58.875 "state": "online", 00:14:58.875 "raid_level": "raid5f", 00:14:58.875 "superblock": false, 00:14:58.875 "num_base_bdevs": 4, 00:14:58.875 "num_base_bdevs_discovered": 3, 00:14:58.875 "num_base_bdevs_operational": 3, 00:14:58.875 "base_bdevs_list": [ 00:14:58.875 { 00:14:58.875 "name": null, 00:14:58.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.875 "is_configured": false, 00:14:58.875 "data_offset": 0, 00:14:58.875 "data_size": 65536 00:14:58.875 }, 00:14:58.875 { 00:14:58.875 "name": "BaseBdev2", 00:14:58.875 "uuid": "a959d44e-7bb1-4c9f-8551-0f8c835bc2c7", 00:14:58.875 "is_configured": true, 00:14:58.875 "data_offset": 0, 00:14:58.875 "data_size": 65536 00:14:58.875 }, 00:14:58.875 { 00:14:58.875 "name": "BaseBdev3", 00:14:58.875 "uuid": "bbfa2893-7fce-47dc-8b2f-3245d69c82cd", 00:14:58.875 "is_configured": true, 00:14:58.875 "data_offset": 0, 00:14:58.875 "data_size": 65536 00:14:58.875 }, 00:14:58.875 { 00:14:58.875 "name": "BaseBdev4", 00:14:58.875 "uuid": "45d2e4bd-b116-43c0-bcab-1030c0e6fbe1", 00:14:58.875 "is_configured": true, 00:14:58.875 "data_offset": 0, 00:14:58.875 "data_size": 65536 00:14:58.875 } 00:14:58.875 ] 00:14:58.875 }' 00:14:58.875 
04:12:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.875 04:12:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.135 04:12:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:59.135 04:12:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:59.135 04:12:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:59.135 04:12:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.135 04:12:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.135 04:12:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.135 04:12:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.394 04:12:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:59.394 04:12:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:59.394 04:12:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:59.394 04:12:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.394 04:12:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.394 [2024-11-21 04:12:59.114380] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:59.394 [2024-11-21 04:12:59.114545] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:59.394 [2024-11-21 04:12:59.135292] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:59.394 04:12:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:14:59.394 04:12:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:59.394 04:12:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:59.394 04:12:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.394 04:12:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:59.394 04:12:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.394 04:12:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.394 04:12:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.394 04:12:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:59.394 04:12:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:59.394 04:12:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:59.394 04:12:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.394 04:12:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.394 [2024-11-21 04:12:59.195187] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:59.394 04:12:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.394 04:12:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:59.394 04:12:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:59.394 04:12:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:59.394 04:12:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:59.394 04:12:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.394 04:12:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.394 04:12:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.394 04:12:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:59.394 04:12:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:59.394 04:12:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:14:59.394 04:12:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.394 04:12:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.394 [2024-11-21 04:12:59.254746] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:59.394 [2024-11-21 04:12:59.254853] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:14:59.394 04:12:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.394 04:12:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:59.394 04:12:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:59.394 04:12:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.394 04:12:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:59.394 04:12:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.394 04:12:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.394 04:12:59 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.394 04:12:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:59.394 04:12:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:59.394 04:12:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:14:59.394 04:12:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:59.394 04:12:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:59.394 04:12:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:59.394 04:12:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.394 04:12:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.394 BaseBdev2 00:14:59.394 04:12:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.394 04:12:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:59.394 04:12:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:59.394 04:12:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:59.394 04:12:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:59.394 04:12:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:59.394 04:12:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:59.394 04:12:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:59.394 04:12:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:59.394 04:12:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.394 04:12:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.394 04:12:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:59.394 04:12:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.394 04:12:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.655 [ 00:14:59.655 { 00:14:59.655 "name": "BaseBdev2", 00:14:59.655 "aliases": [ 00:14:59.655 "737e9d9d-24d8-4752-bbed-be8de2351e0d" 00:14:59.655 ], 00:14:59.655 "product_name": "Malloc disk", 00:14:59.655 "block_size": 512, 00:14:59.655 "num_blocks": 65536, 00:14:59.655 "uuid": "737e9d9d-24d8-4752-bbed-be8de2351e0d", 00:14:59.655 "assigned_rate_limits": { 00:14:59.655 "rw_ios_per_sec": 0, 00:14:59.655 "rw_mbytes_per_sec": 0, 00:14:59.655 "r_mbytes_per_sec": 0, 00:14:59.655 "w_mbytes_per_sec": 0 00:14:59.655 }, 00:14:59.655 "claimed": false, 00:14:59.655 "zoned": false, 00:14:59.655 "supported_io_types": { 00:14:59.655 "read": true, 00:14:59.655 "write": true, 00:14:59.655 "unmap": true, 00:14:59.655 "flush": true, 00:14:59.655 "reset": true, 00:14:59.655 "nvme_admin": false, 00:14:59.655 "nvme_io": false, 00:14:59.655 "nvme_io_md": false, 00:14:59.655 "write_zeroes": true, 00:14:59.655 "zcopy": true, 00:14:59.655 "get_zone_info": false, 00:14:59.655 "zone_management": false, 00:14:59.655 "zone_append": false, 00:14:59.655 "compare": false, 00:14:59.655 "compare_and_write": false, 00:14:59.655 "abort": true, 00:14:59.655 "seek_hole": false, 00:14:59.655 "seek_data": false, 00:14:59.655 "copy": true, 00:14:59.655 "nvme_iov_md": false 00:14:59.655 }, 00:14:59.655 "memory_domains": [ 00:14:59.655 { 00:14:59.655 "dma_device_id": "system", 00:14:59.655 "dma_device_type": 1 00:14:59.655 }, 
00:14:59.655 { 00:14:59.655 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:59.655 "dma_device_type": 2 00:14:59.655 } 00:14:59.655 ], 00:14:59.655 "driver_specific": {} 00:14:59.655 } 00:14:59.655 ] 00:14:59.655 04:12:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.655 04:12:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:59.655 04:12:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:59.655 04:12:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:59.655 04:12:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:59.655 04:12:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.655 04:12:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.655 BaseBdev3 00:14:59.655 04:12:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.655 04:12:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:59.655 04:12:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:59.655 04:12:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:59.655 04:12:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:59.655 04:12:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:59.655 04:12:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:59.655 04:12:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:59.655 04:12:59 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.655 04:12:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.655 04:12:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.655 04:12:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:59.655 04:12:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.655 04:12:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.655 [ 00:14:59.655 { 00:14:59.655 "name": "BaseBdev3", 00:14:59.655 "aliases": [ 00:14:59.655 "12b9377d-991b-449e-a00b-7b2bf7e50b49" 00:14:59.655 ], 00:14:59.655 "product_name": "Malloc disk", 00:14:59.655 "block_size": 512, 00:14:59.655 "num_blocks": 65536, 00:14:59.655 "uuid": "12b9377d-991b-449e-a00b-7b2bf7e50b49", 00:14:59.655 "assigned_rate_limits": { 00:14:59.655 "rw_ios_per_sec": 0, 00:14:59.655 "rw_mbytes_per_sec": 0, 00:14:59.655 "r_mbytes_per_sec": 0, 00:14:59.655 "w_mbytes_per_sec": 0 00:14:59.655 }, 00:14:59.655 "claimed": false, 00:14:59.655 "zoned": false, 00:14:59.655 "supported_io_types": { 00:14:59.655 "read": true, 00:14:59.655 "write": true, 00:14:59.655 "unmap": true, 00:14:59.655 "flush": true, 00:14:59.655 "reset": true, 00:14:59.655 "nvme_admin": false, 00:14:59.655 "nvme_io": false, 00:14:59.655 "nvme_io_md": false, 00:14:59.655 "write_zeroes": true, 00:14:59.655 "zcopy": true, 00:14:59.655 "get_zone_info": false, 00:14:59.655 "zone_management": false, 00:14:59.655 "zone_append": false, 00:14:59.655 "compare": false, 00:14:59.655 "compare_and_write": false, 00:14:59.655 "abort": true, 00:14:59.655 "seek_hole": false, 00:14:59.655 "seek_data": false, 00:14:59.655 "copy": true, 00:14:59.655 "nvme_iov_md": false 00:14:59.655 }, 00:14:59.655 "memory_domains": [ 00:14:59.655 { 00:14:59.655 "dma_device_id": "system", 00:14:59.656 
"dma_device_type": 1 00:14:59.656 }, 00:14:59.656 { 00:14:59.656 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:59.656 "dma_device_type": 2 00:14:59.656 } 00:14:59.656 ], 00:14:59.656 "driver_specific": {} 00:14:59.656 } 00:14:59.656 ] 00:14:59.656 04:12:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.656 04:12:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:59.656 04:12:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:59.656 04:12:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:59.656 04:12:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:59.656 04:12:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.656 04:12:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.656 BaseBdev4 00:14:59.656 04:12:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.656 04:12:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:14:59.656 04:12:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:59.656 04:12:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:59.656 04:12:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:59.656 04:12:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:59.656 04:12:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:59.656 04:12:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:59.656 04:12:59 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.656 04:12:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.656 04:12:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.656 04:12:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:59.656 04:12:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.656 04:12:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.656 [ 00:14:59.656 { 00:14:59.656 "name": "BaseBdev4", 00:14:59.656 "aliases": [ 00:14:59.656 "d0a5660a-e84c-4515-a599-68ed6b506ae7" 00:14:59.656 ], 00:14:59.656 "product_name": "Malloc disk", 00:14:59.656 "block_size": 512, 00:14:59.656 "num_blocks": 65536, 00:14:59.656 "uuid": "d0a5660a-e84c-4515-a599-68ed6b506ae7", 00:14:59.656 "assigned_rate_limits": { 00:14:59.656 "rw_ios_per_sec": 0, 00:14:59.656 "rw_mbytes_per_sec": 0, 00:14:59.656 "r_mbytes_per_sec": 0, 00:14:59.656 "w_mbytes_per_sec": 0 00:14:59.656 }, 00:14:59.656 "claimed": false, 00:14:59.656 "zoned": false, 00:14:59.656 "supported_io_types": { 00:14:59.656 "read": true, 00:14:59.656 "write": true, 00:14:59.656 "unmap": true, 00:14:59.656 "flush": true, 00:14:59.656 "reset": true, 00:14:59.656 "nvme_admin": false, 00:14:59.656 "nvme_io": false, 00:14:59.656 "nvme_io_md": false, 00:14:59.656 "write_zeroes": true, 00:14:59.656 "zcopy": true, 00:14:59.656 "get_zone_info": false, 00:14:59.656 "zone_management": false, 00:14:59.656 "zone_append": false, 00:14:59.656 "compare": false, 00:14:59.656 "compare_and_write": false, 00:14:59.656 "abort": true, 00:14:59.656 "seek_hole": false, 00:14:59.656 "seek_data": false, 00:14:59.656 "copy": true, 00:14:59.656 "nvme_iov_md": false 00:14:59.656 }, 00:14:59.656 "memory_domains": [ 00:14:59.656 { 00:14:59.656 
"dma_device_id": "system", 00:14:59.656 "dma_device_type": 1 00:14:59.656 }, 00:14:59.656 { 00:14:59.656 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:59.656 "dma_device_type": 2 00:14:59.656 } 00:14:59.656 ], 00:14:59.656 "driver_specific": {} 00:14:59.656 } 00:14:59.656 ] 00:14:59.656 04:12:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.656 04:12:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:59.656 04:12:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:59.656 04:12:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:59.656 04:12:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:59.656 04:12:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.656 04:12:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.656 [2024-11-21 04:12:59.510661] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:59.656 [2024-11-21 04:12:59.510760] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:59.656 [2024-11-21 04:12:59.510808] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:59.656 [2024-11-21 04:12:59.512870] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:59.656 [2024-11-21 04:12:59.512968] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:59.656 04:12:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.656 04:12:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:14:59.656 04:12:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:59.656 04:12:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:59.656 04:12:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:59.656 04:12:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:59.656 04:12:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:59.656 04:12:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:59.656 04:12:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:59.656 04:12:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:59.656 04:12:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:59.656 04:12:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.656 04:12:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:59.656 04:12:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.656 04:12:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.656 04:12:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.656 04:12:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:59.656 "name": "Existed_Raid", 00:14:59.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.656 "strip_size_kb": 64, 00:14:59.656 "state": "configuring", 00:14:59.656 "raid_level": "raid5f", 00:14:59.657 "superblock": false, 00:14:59.657 
"num_base_bdevs": 4, 00:14:59.657 "num_base_bdevs_discovered": 3, 00:14:59.657 "num_base_bdevs_operational": 4, 00:14:59.657 "base_bdevs_list": [ 00:14:59.657 { 00:14:59.657 "name": "BaseBdev1", 00:14:59.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.657 "is_configured": false, 00:14:59.657 "data_offset": 0, 00:14:59.657 "data_size": 0 00:14:59.657 }, 00:14:59.657 { 00:14:59.657 "name": "BaseBdev2", 00:14:59.657 "uuid": "737e9d9d-24d8-4752-bbed-be8de2351e0d", 00:14:59.657 "is_configured": true, 00:14:59.657 "data_offset": 0, 00:14:59.657 "data_size": 65536 00:14:59.657 }, 00:14:59.657 { 00:14:59.657 "name": "BaseBdev3", 00:14:59.657 "uuid": "12b9377d-991b-449e-a00b-7b2bf7e50b49", 00:14:59.657 "is_configured": true, 00:14:59.657 "data_offset": 0, 00:14:59.657 "data_size": 65536 00:14:59.657 }, 00:14:59.657 { 00:14:59.657 "name": "BaseBdev4", 00:14:59.657 "uuid": "d0a5660a-e84c-4515-a599-68ed6b506ae7", 00:14:59.657 "is_configured": true, 00:14:59.657 "data_offset": 0, 00:14:59.657 "data_size": 65536 00:14:59.657 } 00:14:59.657 ] 00:14:59.657 }' 00:14:59.657 04:12:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:59.657 04:12:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.227 04:12:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:00.227 04:12:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.227 04:12:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.227 [2024-11-21 04:12:59.945871] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:00.227 04:12:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.227 04:12:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 
00:15:00.227 04:12:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:00.227 04:12:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:00.227 04:12:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:00.227 04:12:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:00.227 04:12:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:00.227 04:12:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.227 04:12:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.227 04:12:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.227 04:12:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.227 04:12:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.227 04:12:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:00.227 04:12:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.227 04:12:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.227 04:12:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.227 04:12:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.227 "name": "Existed_Raid", 00:15:00.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.227 "strip_size_kb": 64, 00:15:00.227 "state": "configuring", 00:15:00.227 "raid_level": "raid5f", 00:15:00.227 "superblock": false, 00:15:00.227 "num_base_bdevs": 4, 
00:15:00.227 "num_base_bdevs_discovered": 2, 00:15:00.227 "num_base_bdevs_operational": 4, 00:15:00.227 "base_bdevs_list": [ 00:15:00.227 { 00:15:00.227 "name": "BaseBdev1", 00:15:00.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.228 "is_configured": false, 00:15:00.228 "data_offset": 0, 00:15:00.228 "data_size": 0 00:15:00.228 }, 00:15:00.228 { 00:15:00.228 "name": null, 00:15:00.228 "uuid": "737e9d9d-24d8-4752-bbed-be8de2351e0d", 00:15:00.228 "is_configured": false, 00:15:00.228 "data_offset": 0, 00:15:00.228 "data_size": 65536 00:15:00.228 }, 00:15:00.228 { 00:15:00.228 "name": "BaseBdev3", 00:15:00.228 "uuid": "12b9377d-991b-449e-a00b-7b2bf7e50b49", 00:15:00.228 "is_configured": true, 00:15:00.228 "data_offset": 0, 00:15:00.228 "data_size": 65536 00:15:00.228 }, 00:15:00.228 { 00:15:00.228 "name": "BaseBdev4", 00:15:00.228 "uuid": "d0a5660a-e84c-4515-a599-68ed6b506ae7", 00:15:00.228 "is_configured": true, 00:15:00.228 "data_offset": 0, 00:15:00.228 "data_size": 65536 00:15:00.228 } 00:15:00.228 ] 00:15:00.228 }' 00:15:00.228 04:13:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.228 04:13:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.487 04:13:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:00.487 04:13:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.487 04:13:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.487 04:13:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.487 04:13:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.487 04:13:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:00.487 04:13:00 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:00.488 04:13:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.488 04:13:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.748 [2024-11-21 04:13:00.477796] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:00.748 BaseBdev1 00:15:00.748 04:13:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.748 04:13:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:00.748 04:13:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:00.748 04:13:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:00.748 04:13:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:00.748 04:13:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:00.748 04:13:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:00.748 04:13:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:00.748 04:13:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.748 04:13:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.748 04:13:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.748 04:13:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:00.748 04:13:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.748 04:13:00 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.748 [ 00:15:00.748 { 00:15:00.748 "name": "BaseBdev1", 00:15:00.748 "aliases": [ 00:15:00.748 "bfa33d27-b5bd-4948-bce2-ef47fb92708b" 00:15:00.748 ], 00:15:00.748 "product_name": "Malloc disk", 00:15:00.748 "block_size": 512, 00:15:00.748 "num_blocks": 65536, 00:15:00.748 "uuid": "bfa33d27-b5bd-4948-bce2-ef47fb92708b", 00:15:00.748 "assigned_rate_limits": { 00:15:00.748 "rw_ios_per_sec": 0, 00:15:00.748 "rw_mbytes_per_sec": 0, 00:15:00.748 "r_mbytes_per_sec": 0, 00:15:00.748 "w_mbytes_per_sec": 0 00:15:00.748 }, 00:15:00.748 "claimed": true, 00:15:00.748 "claim_type": "exclusive_write", 00:15:00.748 "zoned": false, 00:15:00.748 "supported_io_types": { 00:15:00.748 "read": true, 00:15:00.748 "write": true, 00:15:00.748 "unmap": true, 00:15:00.748 "flush": true, 00:15:00.748 "reset": true, 00:15:00.748 "nvme_admin": false, 00:15:00.748 "nvme_io": false, 00:15:00.748 "nvme_io_md": false, 00:15:00.748 "write_zeroes": true, 00:15:00.748 "zcopy": true, 00:15:00.748 "get_zone_info": false, 00:15:00.748 "zone_management": false, 00:15:00.748 "zone_append": false, 00:15:00.748 "compare": false, 00:15:00.748 "compare_and_write": false, 00:15:00.748 "abort": true, 00:15:00.749 "seek_hole": false, 00:15:00.749 "seek_data": false, 00:15:00.749 "copy": true, 00:15:00.749 "nvme_iov_md": false 00:15:00.749 }, 00:15:00.749 "memory_domains": [ 00:15:00.749 { 00:15:00.749 "dma_device_id": "system", 00:15:00.749 "dma_device_type": 1 00:15:00.749 }, 00:15:00.749 { 00:15:00.749 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:00.749 "dma_device_type": 2 00:15:00.749 } 00:15:00.749 ], 00:15:00.749 "driver_specific": {} 00:15:00.749 } 00:15:00.749 ] 00:15:00.749 04:13:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.749 04:13:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:00.749 04:13:00 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:00.749 04:13:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:00.749 04:13:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:00.749 04:13:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:00.749 04:13:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:00.749 04:13:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:00.749 04:13:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.749 04:13:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.749 04:13:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.749 04:13:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.749 04:13:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.749 04:13:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:00.749 04:13:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.749 04:13:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.749 04:13:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.749 04:13:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.749 "name": "Existed_Raid", 00:15:00.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.749 "strip_size_kb": 64, 00:15:00.749 "state": 
"configuring", 00:15:00.749 "raid_level": "raid5f", 00:15:00.749 "superblock": false, 00:15:00.749 "num_base_bdevs": 4, 00:15:00.749 "num_base_bdevs_discovered": 3, 00:15:00.749 "num_base_bdevs_operational": 4, 00:15:00.749 "base_bdevs_list": [ 00:15:00.749 { 00:15:00.749 "name": "BaseBdev1", 00:15:00.749 "uuid": "bfa33d27-b5bd-4948-bce2-ef47fb92708b", 00:15:00.749 "is_configured": true, 00:15:00.749 "data_offset": 0, 00:15:00.749 "data_size": 65536 00:15:00.749 }, 00:15:00.749 { 00:15:00.749 "name": null, 00:15:00.749 "uuid": "737e9d9d-24d8-4752-bbed-be8de2351e0d", 00:15:00.749 "is_configured": false, 00:15:00.749 "data_offset": 0, 00:15:00.749 "data_size": 65536 00:15:00.749 }, 00:15:00.749 { 00:15:00.749 "name": "BaseBdev3", 00:15:00.749 "uuid": "12b9377d-991b-449e-a00b-7b2bf7e50b49", 00:15:00.749 "is_configured": true, 00:15:00.749 "data_offset": 0, 00:15:00.749 "data_size": 65536 00:15:00.749 }, 00:15:00.749 { 00:15:00.749 "name": "BaseBdev4", 00:15:00.749 "uuid": "d0a5660a-e84c-4515-a599-68ed6b506ae7", 00:15:00.749 "is_configured": true, 00:15:00.749 "data_offset": 0, 00:15:00.749 "data_size": 65536 00:15:00.749 } 00:15:00.749 ] 00:15:00.749 }' 00:15:00.749 04:13:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.749 04:13:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.009 04:13:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.009 04:13:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.009 04:13:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.009 04:13:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:01.269 04:13:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.269 04:13:01 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:01.269 04:13:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:01.269 04:13:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.269 04:13:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.269 [2024-11-21 04:13:01.020936] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:01.269 04:13:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.269 04:13:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:01.269 04:13:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:01.269 04:13:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:01.269 04:13:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:01.269 04:13:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:01.269 04:13:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:01.269 04:13:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.269 04:13:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:01.269 04:13:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.269 04:13:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.269 04:13:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:01.269 04:13:01 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.269 04:13:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.269 04:13:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.269 04:13:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.269 04:13:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.269 "name": "Existed_Raid", 00:15:01.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.269 "strip_size_kb": 64, 00:15:01.269 "state": "configuring", 00:15:01.269 "raid_level": "raid5f", 00:15:01.269 "superblock": false, 00:15:01.269 "num_base_bdevs": 4, 00:15:01.269 "num_base_bdevs_discovered": 2, 00:15:01.269 "num_base_bdevs_operational": 4, 00:15:01.269 "base_bdevs_list": [ 00:15:01.269 { 00:15:01.269 "name": "BaseBdev1", 00:15:01.269 "uuid": "bfa33d27-b5bd-4948-bce2-ef47fb92708b", 00:15:01.269 "is_configured": true, 00:15:01.269 "data_offset": 0, 00:15:01.269 "data_size": 65536 00:15:01.269 }, 00:15:01.269 { 00:15:01.269 "name": null, 00:15:01.269 "uuid": "737e9d9d-24d8-4752-bbed-be8de2351e0d", 00:15:01.269 "is_configured": false, 00:15:01.269 "data_offset": 0, 00:15:01.269 "data_size": 65536 00:15:01.269 }, 00:15:01.269 { 00:15:01.269 "name": null, 00:15:01.269 "uuid": "12b9377d-991b-449e-a00b-7b2bf7e50b49", 00:15:01.269 "is_configured": false, 00:15:01.269 "data_offset": 0, 00:15:01.269 "data_size": 65536 00:15:01.269 }, 00:15:01.269 { 00:15:01.269 "name": "BaseBdev4", 00:15:01.269 "uuid": "d0a5660a-e84c-4515-a599-68ed6b506ae7", 00:15:01.269 "is_configured": true, 00:15:01.269 "data_offset": 0, 00:15:01.269 "data_size": 65536 00:15:01.269 } 00:15:01.269 ] 00:15:01.269 }' 00:15:01.269 04:13:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.269 04:13:01 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.529 04:13:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.529 04:13:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.529 04:13:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.529 04:13:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:01.529 04:13:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.790 04:13:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:01.790 04:13:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:01.790 04:13:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.790 04:13:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.790 [2024-11-21 04:13:01.520082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:01.790 04:13:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.790 04:13:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:01.790 04:13:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:01.790 04:13:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:01.790 04:13:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:01.790 04:13:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:01.790 
04:13:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:01.790 04:13:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.790 04:13:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:01.790 04:13:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.790 04:13:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.790 04:13:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.790 04:13:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:01.790 04:13:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.790 04:13:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.790 04:13:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.790 04:13:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.790 "name": "Existed_Raid", 00:15:01.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.790 "strip_size_kb": 64, 00:15:01.790 "state": "configuring", 00:15:01.790 "raid_level": "raid5f", 00:15:01.790 "superblock": false, 00:15:01.790 "num_base_bdevs": 4, 00:15:01.790 "num_base_bdevs_discovered": 3, 00:15:01.790 "num_base_bdevs_operational": 4, 00:15:01.790 "base_bdevs_list": [ 00:15:01.790 { 00:15:01.790 "name": "BaseBdev1", 00:15:01.790 "uuid": "bfa33d27-b5bd-4948-bce2-ef47fb92708b", 00:15:01.790 "is_configured": true, 00:15:01.790 "data_offset": 0, 00:15:01.790 "data_size": 65536 00:15:01.790 }, 00:15:01.790 { 00:15:01.790 "name": null, 00:15:01.790 "uuid": "737e9d9d-24d8-4752-bbed-be8de2351e0d", 00:15:01.790 "is_configured": 
false, 00:15:01.790 "data_offset": 0, 00:15:01.790 "data_size": 65536 00:15:01.790 }, 00:15:01.790 { 00:15:01.790 "name": "BaseBdev3", 00:15:01.790 "uuid": "12b9377d-991b-449e-a00b-7b2bf7e50b49", 00:15:01.790 "is_configured": true, 00:15:01.790 "data_offset": 0, 00:15:01.790 "data_size": 65536 00:15:01.790 }, 00:15:01.790 { 00:15:01.790 "name": "BaseBdev4", 00:15:01.790 "uuid": "d0a5660a-e84c-4515-a599-68ed6b506ae7", 00:15:01.790 "is_configured": true, 00:15:01.790 "data_offset": 0, 00:15:01.790 "data_size": 65536 00:15:01.790 } 00:15:01.790 ] 00:15:01.790 }' 00:15:01.790 04:13:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.790 04:13:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.050 04:13:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:02.050 04:13:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.050 04:13:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.050 04:13:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.050 04:13:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.050 04:13:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:02.050 04:13:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:02.050 04:13:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.050 04:13:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.050 [2024-11-21 04:13:01.967350] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:02.050 04:13:01 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.050 04:13:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:02.050 04:13:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:02.050 04:13:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:02.050 04:13:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:02.050 04:13:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:02.050 04:13:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:02.050 04:13:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.050 04:13:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:02.050 04:13:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:02.050 04:13:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:02.050 04:13:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:02.050 04:13:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.050 04:13:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.051 04:13:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.051 04:13:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.311 04:13:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:02.311 "name": "Existed_Raid", 00:15:02.311 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:02.311 "strip_size_kb": 64, 00:15:02.311 "state": "configuring", 00:15:02.311 "raid_level": "raid5f", 00:15:02.311 "superblock": false, 00:15:02.311 "num_base_bdevs": 4, 00:15:02.311 "num_base_bdevs_discovered": 2, 00:15:02.311 "num_base_bdevs_operational": 4, 00:15:02.311 "base_bdevs_list": [ 00:15:02.311 { 00:15:02.311 "name": null, 00:15:02.311 "uuid": "bfa33d27-b5bd-4948-bce2-ef47fb92708b", 00:15:02.311 "is_configured": false, 00:15:02.311 "data_offset": 0, 00:15:02.311 "data_size": 65536 00:15:02.311 }, 00:15:02.311 { 00:15:02.311 "name": null, 00:15:02.311 "uuid": "737e9d9d-24d8-4752-bbed-be8de2351e0d", 00:15:02.311 "is_configured": false, 00:15:02.311 "data_offset": 0, 00:15:02.311 "data_size": 65536 00:15:02.311 }, 00:15:02.311 { 00:15:02.311 "name": "BaseBdev3", 00:15:02.311 "uuid": "12b9377d-991b-449e-a00b-7b2bf7e50b49", 00:15:02.311 "is_configured": true, 00:15:02.311 "data_offset": 0, 00:15:02.311 "data_size": 65536 00:15:02.311 }, 00:15:02.311 { 00:15:02.311 "name": "BaseBdev4", 00:15:02.311 "uuid": "d0a5660a-e84c-4515-a599-68ed6b506ae7", 00:15:02.311 "is_configured": true, 00:15:02.311 "data_offset": 0, 00:15:02.311 "data_size": 65536 00:15:02.311 } 00:15:02.311 ] 00:15:02.311 }' 00:15:02.311 04:13:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:02.311 04:13:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.572 04:13:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:02.572 04:13:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.572 04:13:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.572 04:13:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.572 04:13:02 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.572 04:13:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:02.572 04:13:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:02.572 04:13:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.572 04:13:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.572 [2024-11-21 04:13:02.446497] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:02.572 04:13:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.572 04:13:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:02.572 04:13:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:02.572 04:13:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:02.572 04:13:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:02.572 04:13:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:02.572 04:13:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:02.572 04:13:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.572 04:13:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:02.572 04:13:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:02.572 04:13:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:02.572 04:13:02 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.572 04:13:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:02.572 04:13:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.572 04:13:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.572 04:13:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.572 04:13:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:02.572 "name": "Existed_Raid", 00:15:02.572 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.572 "strip_size_kb": 64, 00:15:02.572 "state": "configuring", 00:15:02.572 "raid_level": "raid5f", 00:15:02.572 "superblock": false, 00:15:02.572 "num_base_bdevs": 4, 00:15:02.572 "num_base_bdevs_discovered": 3, 00:15:02.572 "num_base_bdevs_operational": 4, 00:15:02.572 "base_bdevs_list": [ 00:15:02.572 { 00:15:02.572 "name": null, 00:15:02.572 "uuid": "bfa33d27-b5bd-4948-bce2-ef47fb92708b", 00:15:02.572 "is_configured": false, 00:15:02.572 "data_offset": 0, 00:15:02.572 "data_size": 65536 00:15:02.572 }, 00:15:02.572 { 00:15:02.572 "name": "BaseBdev2", 00:15:02.572 "uuid": "737e9d9d-24d8-4752-bbed-be8de2351e0d", 00:15:02.572 "is_configured": true, 00:15:02.572 "data_offset": 0, 00:15:02.572 "data_size": 65536 00:15:02.572 }, 00:15:02.572 { 00:15:02.572 "name": "BaseBdev3", 00:15:02.572 "uuid": "12b9377d-991b-449e-a00b-7b2bf7e50b49", 00:15:02.572 "is_configured": true, 00:15:02.572 "data_offset": 0, 00:15:02.572 "data_size": 65536 00:15:02.572 }, 00:15:02.572 { 00:15:02.572 "name": "BaseBdev4", 00:15:02.572 "uuid": "d0a5660a-e84c-4515-a599-68ed6b506ae7", 00:15:02.572 "is_configured": true, 00:15:02.572 "data_offset": 0, 00:15:02.572 "data_size": 65536 00:15:02.572 } 00:15:02.572 ] 00:15:02.572 }' 00:15:02.572 04:13:02 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:02.572 04:13:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.143 04:13:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.143 04:13:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.143 04:13:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.143 04:13:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:03.143 04:13:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.143 04:13:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:03.143 04:13:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.143 04:13:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.143 04:13:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.143 04:13:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:03.143 04:13:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.143 04:13:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u bfa33d27-b5bd-4948-bce2-ef47fb92708b 00:15:03.143 04:13:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.143 04:13:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.143 [2024-11-21 04:13:03.028952] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:03.143 NewBaseBdev 
00:15:03.143 [2024-11-21 04:13:03.029076] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:15:03.143 [2024-11-21 04:13:03.029088] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:03.143 [2024-11-21 04:13:03.029441] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:15:03.143 [2024-11-21 04:13:03.029938] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:15:03.143 [2024-11-21 04:13:03.029952] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:15:03.143 [2024-11-21 04:13:03.030141] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:03.143 04:13:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.143 04:13:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:03.143 04:13:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:03.143 04:13:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:03.143 04:13:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:03.143 04:13:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:03.143 04:13:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:03.143 04:13:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:03.143 04:13:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.143 04:13:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.143 04:13:03 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.143 04:13:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:03.143 04:13:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.143 04:13:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.143 [ 00:15:03.143 { 00:15:03.143 "name": "NewBaseBdev", 00:15:03.143 "aliases": [ 00:15:03.143 "bfa33d27-b5bd-4948-bce2-ef47fb92708b" 00:15:03.143 ], 00:15:03.143 "product_name": "Malloc disk", 00:15:03.143 "block_size": 512, 00:15:03.143 "num_blocks": 65536, 00:15:03.143 "uuid": "bfa33d27-b5bd-4948-bce2-ef47fb92708b", 00:15:03.143 "assigned_rate_limits": { 00:15:03.143 "rw_ios_per_sec": 0, 00:15:03.143 "rw_mbytes_per_sec": 0, 00:15:03.143 "r_mbytes_per_sec": 0, 00:15:03.143 "w_mbytes_per_sec": 0 00:15:03.143 }, 00:15:03.143 "claimed": true, 00:15:03.143 "claim_type": "exclusive_write", 00:15:03.143 "zoned": false, 00:15:03.143 "supported_io_types": { 00:15:03.143 "read": true, 00:15:03.143 "write": true, 00:15:03.143 "unmap": true, 00:15:03.143 "flush": true, 00:15:03.143 "reset": true, 00:15:03.143 "nvme_admin": false, 00:15:03.143 "nvme_io": false, 00:15:03.143 "nvme_io_md": false, 00:15:03.143 "write_zeroes": true, 00:15:03.143 "zcopy": true, 00:15:03.143 "get_zone_info": false, 00:15:03.143 "zone_management": false, 00:15:03.143 "zone_append": false, 00:15:03.143 "compare": false, 00:15:03.143 "compare_and_write": false, 00:15:03.143 "abort": true, 00:15:03.143 "seek_hole": false, 00:15:03.143 "seek_data": false, 00:15:03.143 "copy": true, 00:15:03.143 "nvme_iov_md": false 00:15:03.143 }, 00:15:03.143 "memory_domains": [ 00:15:03.143 { 00:15:03.143 "dma_device_id": "system", 00:15:03.144 "dma_device_type": 1 00:15:03.144 }, 00:15:03.144 { 00:15:03.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:03.144 "dma_device_type": 2 00:15:03.144 } 
00:15:03.144 ], 00:15:03.144 "driver_specific": {} 00:15:03.144 } 00:15:03.144 ] 00:15:03.144 04:13:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.144 04:13:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:03.144 04:13:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:15:03.144 04:13:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:03.144 04:13:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:03.144 04:13:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:03.144 04:13:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:03.144 04:13:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:03.144 04:13:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:03.144 04:13:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:03.144 04:13:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:03.144 04:13:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:03.144 04:13:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:03.144 04:13:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.144 04:13:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.144 04:13:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.144 04:13:03 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.144 04:13:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:03.144 "name": "Existed_Raid", 00:15:03.144 "uuid": "cd1ae4e2-ed9f-41d8-85a3-26f8e33b35ae", 00:15:03.144 "strip_size_kb": 64, 00:15:03.144 "state": "online", 00:15:03.144 "raid_level": "raid5f", 00:15:03.144 "superblock": false, 00:15:03.144 "num_base_bdevs": 4, 00:15:03.144 "num_base_bdevs_discovered": 4, 00:15:03.144 "num_base_bdevs_operational": 4, 00:15:03.144 "base_bdevs_list": [ 00:15:03.144 { 00:15:03.144 "name": "NewBaseBdev", 00:15:03.144 "uuid": "bfa33d27-b5bd-4948-bce2-ef47fb92708b", 00:15:03.144 "is_configured": true, 00:15:03.144 "data_offset": 0, 00:15:03.144 "data_size": 65536 00:15:03.144 }, 00:15:03.144 { 00:15:03.144 "name": "BaseBdev2", 00:15:03.144 "uuid": "737e9d9d-24d8-4752-bbed-be8de2351e0d", 00:15:03.144 "is_configured": true, 00:15:03.144 "data_offset": 0, 00:15:03.144 "data_size": 65536 00:15:03.144 }, 00:15:03.144 { 00:15:03.144 "name": "BaseBdev3", 00:15:03.144 "uuid": "12b9377d-991b-449e-a00b-7b2bf7e50b49", 00:15:03.144 "is_configured": true, 00:15:03.144 "data_offset": 0, 00:15:03.144 "data_size": 65536 00:15:03.144 }, 00:15:03.144 { 00:15:03.144 "name": "BaseBdev4", 00:15:03.144 "uuid": "d0a5660a-e84c-4515-a599-68ed6b506ae7", 00:15:03.144 "is_configured": true, 00:15:03.144 "data_offset": 0, 00:15:03.144 "data_size": 65536 00:15:03.144 } 00:15:03.144 ] 00:15:03.144 }' 00:15:03.144 04:13:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:03.144 04:13:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.714 04:13:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:03.714 04:13:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:03.714 04:13:03 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:03.714 04:13:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:03.714 04:13:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:03.714 04:13:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:03.714 04:13:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:03.714 04:13:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:03.714 04:13:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.714 04:13:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.714 [2024-11-21 04:13:03.492415] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:03.714 04:13:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.714 04:13:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:03.714 "name": "Existed_Raid", 00:15:03.714 "aliases": [ 00:15:03.714 "cd1ae4e2-ed9f-41d8-85a3-26f8e33b35ae" 00:15:03.714 ], 00:15:03.714 "product_name": "Raid Volume", 00:15:03.714 "block_size": 512, 00:15:03.714 "num_blocks": 196608, 00:15:03.714 "uuid": "cd1ae4e2-ed9f-41d8-85a3-26f8e33b35ae", 00:15:03.714 "assigned_rate_limits": { 00:15:03.714 "rw_ios_per_sec": 0, 00:15:03.714 "rw_mbytes_per_sec": 0, 00:15:03.714 "r_mbytes_per_sec": 0, 00:15:03.714 "w_mbytes_per_sec": 0 00:15:03.714 }, 00:15:03.714 "claimed": false, 00:15:03.714 "zoned": false, 00:15:03.714 "supported_io_types": { 00:15:03.714 "read": true, 00:15:03.714 "write": true, 00:15:03.714 "unmap": false, 00:15:03.714 "flush": false, 00:15:03.714 "reset": true, 00:15:03.714 "nvme_admin": false, 00:15:03.714 "nvme_io": false, 00:15:03.714 "nvme_io_md": 
false, 00:15:03.714 "write_zeroes": true, 00:15:03.714 "zcopy": false, 00:15:03.714 "get_zone_info": false, 00:15:03.714 "zone_management": false, 00:15:03.714 "zone_append": false, 00:15:03.714 "compare": false, 00:15:03.714 "compare_and_write": false, 00:15:03.714 "abort": false, 00:15:03.714 "seek_hole": false, 00:15:03.714 "seek_data": false, 00:15:03.714 "copy": false, 00:15:03.714 "nvme_iov_md": false 00:15:03.714 }, 00:15:03.714 "driver_specific": { 00:15:03.714 "raid": { 00:15:03.714 "uuid": "cd1ae4e2-ed9f-41d8-85a3-26f8e33b35ae", 00:15:03.714 "strip_size_kb": 64, 00:15:03.714 "state": "online", 00:15:03.714 "raid_level": "raid5f", 00:15:03.714 "superblock": false, 00:15:03.714 "num_base_bdevs": 4, 00:15:03.714 "num_base_bdevs_discovered": 4, 00:15:03.714 "num_base_bdevs_operational": 4, 00:15:03.714 "base_bdevs_list": [ 00:15:03.715 { 00:15:03.715 "name": "NewBaseBdev", 00:15:03.715 "uuid": "bfa33d27-b5bd-4948-bce2-ef47fb92708b", 00:15:03.715 "is_configured": true, 00:15:03.715 "data_offset": 0, 00:15:03.715 "data_size": 65536 00:15:03.715 }, 00:15:03.715 { 00:15:03.715 "name": "BaseBdev2", 00:15:03.715 "uuid": "737e9d9d-24d8-4752-bbed-be8de2351e0d", 00:15:03.715 "is_configured": true, 00:15:03.715 "data_offset": 0, 00:15:03.715 "data_size": 65536 00:15:03.715 }, 00:15:03.715 { 00:15:03.715 "name": "BaseBdev3", 00:15:03.715 "uuid": "12b9377d-991b-449e-a00b-7b2bf7e50b49", 00:15:03.715 "is_configured": true, 00:15:03.715 "data_offset": 0, 00:15:03.715 "data_size": 65536 00:15:03.715 }, 00:15:03.715 { 00:15:03.715 "name": "BaseBdev4", 00:15:03.715 "uuid": "d0a5660a-e84c-4515-a599-68ed6b506ae7", 00:15:03.715 "is_configured": true, 00:15:03.715 "data_offset": 0, 00:15:03.715 "data_size": 65536 00:15:03.715 } 00:15:03.715 ] 00:15:03.715 } 00:15:03.715 } 00:15:03.715 }' 00:15:03.715 04:13:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:03.715 04:13:03 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:03.715 BaseBdev2 00:15:03.715 BaseBdev3 00:15:03.715 BaseBdev4' 00:15:03.715 04:13:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:03.715 04:13:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:03.715 04:13:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:03.715 04:13:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:03.715 04:13:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.715 04:13:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.715 04:13:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:03.715 04:13:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.715 04:13:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:03.715 04:13:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:03.715 04:13:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:03.715 04:13:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:03.975 04:13:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:03.975 04:13:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.975 04:13:03 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:03.975 04:13:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.975 04:13:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:03.975 04:13:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:03.975 04:13:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:03.975 04:13:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:03.976 04:13:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:03.976 04:13:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.976 04:13:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.976 04:13:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.976 04:13:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:03.976 04:13:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:03.976 04:13:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:03.976 04:13:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:03.976 04:13:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:03.976 04:13:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.976 04:13:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.976 04:13:03 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.976 04:13:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:03.976 04:13:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:03.976 04:13:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:03.976 04:13:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.976 04:13:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.976 [2024-11-21 04:13:03.835631] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:03.976 [2024-11-21 04:13:03.835695] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:03.976 [2024-11-21 04:13:03.835801] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:03.976 [2024-11-21 04:13:03.836113] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:03.976 [2024-11-21 04:13:03.836164] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:15:03.976 04:13:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.976 04:13:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 93299 00:15:03.976 04:13:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 93299 ']' 00:15:03.976 04:13:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 93299 00:15:03.976 04:13:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:15:03.976 04:13:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:15:03.976 04:13:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93299 00:15:03.976 killing process with pid 93299 00:15:03.976 04:13:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:03.976 04:13:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:03.976 04:13:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93299' 00:15:03.976 04:13:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 93299 00:15:03.976 [2024-11-21 04:13:03.882994] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:03.976 04:13:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 93299 00:15:04.236 [2024-11-21 04:13:03.961786] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:04.496 04:13:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:15:04.496 00:15:04.496 real 0m9.815s 00:15:04.496 user 0m16.432s 00:15:04.496 sys 0m2.211s 00:15:04.496 ************************************ 00:15:04.496 END TEST raid5f_state_function_test 00:15:04.496 ************************************ 00:15:04.496 04:13:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:04.496 04:13:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.496 04:13:04 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:15:04.496 04:13:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:04.496 04:13:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:04.496 04:13:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:04.496 ************************************ 00:15:04.496 START TEST 
raid5f_state_function_test_sb 00:15:04.496 ************************************ 00:15:04.496 04:13:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true 00:15:04.496 04:13:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:04.496 04:13:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:15:04.496 04:13:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:04.496 04:13:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:04.496 04:13:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:04.496 04:13:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:04.496 04:13:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:04.496 04:13:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:04.496 04:13:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:04.496 04:13:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:04.496 04:13:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:04.496 04:13:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:04.496 04:13:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:04.496 04:13:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:04.496 04:13:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:04.496 04:13:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:15:04.496 
04:13:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:04.496 04:13:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:04.496 04:13:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:04.496 04:13:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:04.496 04:13:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:04.496 04:13:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:04.496 04:13:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:04.496 04:13:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:04.496 04:13:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:04.496 04:13:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:04.496 04:13:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:04.496 04:13:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:04.496 04:13:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:04.496 04:13:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=93948 00:15:04.496 04:13:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:04.496 04:13:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 93948' 00:15:04.496 Process raid pid: 93948 00:15:04.496 04:13:04 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 93948 00:15:04.496 04:13:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 93948 ']' 00:15:04.496 04:13:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:04.496 04:13:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:04.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:04.496 04:13:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:04.496 04:13:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:04.497 04:13:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.757 [2024-11-21 04:13:04.480442] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
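waitforlisten above blocks until the bdev_svc app (pid 93948) is up and answering on /var/tmp/spdk.sock, retrying up to max_retries=100 times. A simplified sketch of that retry loop; polling only for the socket path rather than issuing a real RPC is an illustrative stand-in, not what the helper actually verifies:

```shell
#!/usr/bin/env bash
# Simplified waitforlisten-style loop. The real helper also checks that the
# target pid is alive and that the app answers RPCs on $rpc_addr; here we
# only poll for the socket path to appear, as an illustrative stand-in.
waitforlisten_sketch() {
    local rpc_addr=${1:-/var/tmp/spdk.sock}
    local max_retries=${2:-100}
    local i
    for (( i = 0; i < max_retries; i++ )); do
        if [[ -S $rpc_addr || -e $rpc_addr ]]; then
            return 0
        fi
        sleep 0.1
    done
    echo "timed out waiting for $rpc_addr" >&2
    return 1
}
```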
00:15:04.757 [2024-11-21 04:13:04.480699] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:04.757 [2024-11-21 04:13:04.639100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:04.757 [2024-11-21 04:13:04.678972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:05.017 [2024-11-21 04:13:04.756035] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:05.017 [2024-11-21 04:13:04.756155] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:05.588 04:13:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:05.588 04:13:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:05.588 04:13:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:05.588 04:13:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.588 04:13:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.588 [2024-11-21 04:13:05.296231] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:05.588 [2024-11-21 04:13:05.296346] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:05.588 [2024-11-21 04:13:05.296375] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:05.588 [2024-11-21 04:13:05.296397] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:05.588 [2024-11-21 04:13:05.296414] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:15:05.588 [2024-11-21 04:13:05.296438] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:05.588 [2024-11-21 04:13:05.296497] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:05.588 [2024-11-21 04:13:05.296534] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:05.588 04:13:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.588 04:13:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:05.588 04:13:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:05.588 04:13:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:05.588 04:13:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:05.588 04:13:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:05.588 04:13:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:05.588 04:13:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:05.588 04:13:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:05.588 04:13:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:05.588 04:13:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:05.588 04:13:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.588 04:13:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:15:05.588 04:13:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.588 04:13:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.588 04:13:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.588 04:13:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:05.588 "name": "Existed_Raid", 00:15:05.588 "uuid": "3e343d6d-ba6e-4e1d-ab98-34b22568aec5", 00:15:05.588 "strip_size_kb": 64, 00:15:05.588 "state": "configuring", 00:15:05.588 "raid_level": "raid5f", 00:15:05.588 "superblock": true, 00:15:05.588 "num_base_bdevs": 4, 00:15:05.588 "num_base_bdevs_discovered": 0, 00:15:05.588 "num_base_bdevs_operational": 4, 00:15:05.588 "base_bdevs_list": [ 00:15:05.588 { 00:15:05.588 "name": "BaseBdev1", 00:15:05.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.588 "is_configured": false, 00:15:05.588 "data_offset": 0, 00:15:05.588 "data_size": 0 00:15:05.588 }, 00:15:05.588 { 00:15:05.589 "name": "BaseBdev2", 00:15:05.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.589 "is_configured": false, 00:15:05.589 "data_offset": 0, 00:15:05.589 "data_size": 0 00:15:05.589 }, 00:15:05.589 { 00:15:05.589 "name": "BaseBdev3", 00:15:05.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.589 "is_configured": false, 00:15:05.589 "data_offset": 0, 00:15:05.589 "data_size": 0 00:15:05.589 }, 00:15:05.589 { 00:15:05.589 "name": "BaseBdev4", 00:15:05.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.589 "is_configured": false, 00:15:05.589 "data_offset": 0, 00:15:05.589 "data_size": 0 00:15:05.589 } 00:15:05.589 ] 00:15:05.589 }' 00:15:05.589 04:13:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:05.589 04:13:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:05.849 04:13:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:05.849 04:13:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.849 04:13:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.849 [2024-11-21 04:13:05.747329] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:05.849 [2024-11-21 04:13:05.747412] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:15:05.849 04:13:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.849 04:13:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:05.849 04:13:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.849 04:13:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.849 [2024-11-21 04:13:05.759345] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:05.849 [2024-11-21 04:13:05.759438] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:05.849 [2024-11-21 04:13:05.759465] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:05.849 [2024-11-21 04:13:05.759487] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:05.849 [2024-11-21 04:13:05.759504] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:05.849 [2024-11-21 04:13:05.759524] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:05.849 [2024-11-21 04:13:05.759541] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:05.849 [2024-11-21 04:13:05.759609] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:05.849 04:13:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.849 04:13:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:05.849 04:13:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.849 04:13:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.849 BaseBdev1 00:15:05.849 [2024-11-21 04:13:05.786409] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:05.849 04:13:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.849 04:13:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:05.849 04:13:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:05.849 04:13:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:05.849 04:13:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:05.849 04:13:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:05.849 04:13:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:05.849 04:13:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:05.849 04:13:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.849 04:13:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:15:05.849 04:13:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.849 04:13:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:05.849 04:13:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.849 04:13:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.849 [ 00:15:05.849 { 00:15:05.849 "name": "BaseBdev1", 00:15:05.849 "aliases": [ 00:15:05.849 "cb53e2d5-8c90-487e-b950-17053a78fff6" 00:15:05.849 ], 00:15:05.849 "product_name": "Malloc disk", 00:15:05.849 "block_size": 512, 00:15:05.849 "num_blocks": 65536, 00:15:05.849 "uuid": "cb53e2d5-8c90-487e-b950-17053a78fff6", 00:15:05.849 "assigned_rate_limits": { 00:15:05.849 "rw_ios_per_sec": 0, 00:15:05.849 "rw_mbytes_per_sec": 0, 00:15:05.849 "r_mbytes_per_sec": 0, 00:15:05.849 "w_mbytes_per_sec": 0 00:15:05.849 }, 00:15:05.849 "claimed": true, 00:15:05.849 "claim_type": "exclusive_write", 00:15:05.849 "zoned": false, 00:15:05.849 "supported_io_types": { 00:15:05.849 "read": true, 00:15:05.849 "write": true, 00:15:05.849 "unmap": true, 00:15:05.849 "flush": true, 00:15:05.849 "reset": true, 00:15:05.849 "nvme_admin": false, 00:15:05.849 "nvme_io": false, 00:15:05.849 "nvme_io_md": false, 00:15:05.849 "write_zeroes": true, 00:15:05.849 "zcopy": true, 00:15:05.849 "get_zone_info": false, 00:15:05.849 "zone_management": false, 00:15:05.849 "zone_append": false, 00:15:05.849 "compare": false, 00:15:05.849 "compare_and_write": false, 00:15:05.849 "abort": true, 00:15:06.110 "seek_hole": false, 00:15:06.110 "seek_data": false, 00:15:06.110 "copy": true, 00:15:06.110 "nvme_iov_md": false 00:15:06.110 }, 00:15:06.110 "memory_domains": [ 00:15:06.110 { 00:15:06.110 "dma_device_id": "system", 00:15:06.110 "dma_device_type": 1 00:15:06.110 }, 00:15:06.110 { 00:15:06.110 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:15:06.110 "dma_device_type": 2 00:15:06.110 } 00:15:06.110 ], 00:15:06.110 "driver_specific": {} 00:15:06.110 } 00:15:06.110 ] 00:15:06.110 04:13:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.110 04:13:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:06.110 04:13:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:06.110 04:13:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:06.110 04:13:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:06.110 04:13:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:06.110 04:13:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:06.110 04:13:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:06.110 04:13:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:06.110 04:13:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:06.110 04:13:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:06.110 04:13:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:06.110 04:13:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.110 04:13:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.110 04:13:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.110 04:13:05 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:06.110 04:13:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.110 04:13:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:06.110 "name": "Existed_Raid", 00:15:06.110 "uuid": "ca973a3e-e338-424c-b801-880dd9eccfc1", 00:15:06.110 "strip_size_kb": 64, 00:15:06.110 "state": "configuring", 00:15:06.110 "raid_level": "raid5f", 00:15:06.110 "superblock": true, 00:15:06.110 "num_base_bdevs": 4, 00:15:06.110 "num_base_bdevs_discovered": 1, 00:15:06.110 "num_base_bdevs_operational": 4, 00:15:06.110 "base_bdevs_list": [ 00:15:06.110 { 00:15:06.110 "name": "BaseBdev1", 00:15:06.110 "uuid": "cb53e2d5-8c90-487e-b950-17053a78fff6", 00:15:06.110 "is_configured": true, 00:15:06.110 "data_offset": 2048, 00:15:06.110 "data_size": 63488 00:15:06.110 }, 00:15:06.110 { 00:15:06.110 "name": "BaseBdev2", 00:15:06.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.110 "is_configured": false, 00:15:06.110 "data_offset": 0, 00:15:06.110 "data_size": 0 00:15:06.110 }, 00:15:06.110 { 00:15:06.110 "name": "BaseBdev3", 00:15:06.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.110 "is_configured": false, 00:15:06.110 "data_offset": 0, 00:15:06.110 "data_size": 0 00:15:06.110 }, 00:15:06.110 { 00:15:06.110 "name": "BaseBdev4", 00:15:06.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.110 "is_configured": false, 00:15:06.110 "data_offset": 0, 00:15:06.110 "data_size": 0 00:15:06.110 } 00:15:06.110 ] 00:15:06.110 }' 00:15:06.110 04:13:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:06.110 04:13:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.371 04:13:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:06.371 04:13:06 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.371 04:13:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.371 [2024-11-21 04:13:06.313510] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:06.371 [2024-11-21 04:13:06.313611] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:15:06.371 04:13:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.371 04:13:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:06.371 04:13:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.371 04:13:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.371 [2024-11-21 04:13:06.325551] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:06.371 [2024-11-21 04:13:06.327668] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:06.371 [2024-11-21 04:13:06.327742] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:06.371 [2024-11-21 04:13:06.327769] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:06.371 [2024-11-21 04:13:06.327789] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:06.371 [2024-11-21 04:13:06.327806] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:06.371 [2024-11-21 04:13:06.327824] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:06.371 04:13:06 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.371 04:13:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:06.371 04:13:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:06.371 04:13:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:06.371 04:13:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:06.371 04:13:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:06.371 04:13:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:06.371 04:13:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:06.371 04:13:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:06.371 04:13:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:06.371 04:13:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:06.371 04:13:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:06.371 04:13:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:06.371 04:13:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.371 04:13:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:06.371 04:13:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.371 04:13:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.631 04:13:06 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.631 04:13:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:06.631 "name": "Existed_Raid", 00:15:06.631 "uuid": "db646529-3816-4815-9a4d-ce74c5330bea", 00:15:06.631 "strip_size_kb": 64, 00:15:06.631 "state": "configuring", 00:15:06.631 "raid_level": "raid5f", 00:15:06.631 "superblock": true, 00:15:06.631 "num_base_bdevs": 4, 00:15:06.631 "num_base_bdevs_discovered": 1, 00:15:06.631 "num_base_bdevs_operational": 4, 00:15:06.631 "base_bdevs_list": [ 00:15:06.631 { 00:15:06.631 "name": "BaseBdev1", 00:15:06.631 "uuid": "cb53e2d5-8c90-487e-b950-17053a78fff6", 00:15:06.631 "is_configured": true, 00:15:06.631 "data_offset": 2048, 00:15:06.631 "data_size": 63488 00:15:06.631 }, 00:15:06.631 { 00:15:06.631 "name": "BaseBdev2", 00:15:06.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.631 "is_configured": false, 00:15:06.631 "data_offset": 0, 00:15:06.631 "data_size": 0 00:15:06.631 }, 00:15:06.631 { 00:15:06.631 "name": "BaseBdev3", 00:15:06.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.631 "is_configured": false, 00:15:06.631 "data_offset": 0, 00:15:06.631 "data_size": 0 00:15:06.631 }, 00:15:06.631 { 00:15:06.631 "name": "BaseBdev4", 00:15:06.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.631 "is_configured": false, 00:15:06.631 "data_offset": 0, 00:15:06.631 "data_size": 0 00:15:06.631 } 00:15:06.631 ] 00:15:06.631 }' 00:15:06.631 04:13:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:06.631 04:13:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.891 04:13:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:06.892 04:13:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
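waitforbdev, used above after each bdev_malloc_create, retries `rpc_cmd bdev_get_bdevs -b <name> -t 2000` until the named bdev shows up or the 2000 ms default timeout expires. A sketch of that polling shape, with a stand-in query function in place of the RPC:

```shell
#!/usr/bin/env bash
# waitforbdev-style polling sketch. bdev_exists is an illustrative stand-in
# for: rpc_cmd bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout"
bdev_exists() {
    [[ " ${EXISTING_BDEVS:-} " == *" $1 "* ]]
}

waitforbdev_sketch() {
    local bdev_name=$1
    local bdev_timeout=${2:-2000}   # milliseconds, matching the log's default
    local elapsed=0
    while (( elapsed < bdev_timeout )); do
        if bdev_exists "$bdev_name"; then
            return 0
        fi
        sleep 0.1
        (( elapsed += 100 ))
    done
    return 1
}

EXISTING_BDEVS="BaseBdev1 BaseBdev2"
waitforbdev_sketch BaseBdev2 500 && echo "BaseBdev2 is up"
```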
00:15:06.892 04:13:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.892 BaseBdev2 00:15:06.892 [2024-11-21 04:13:06.761758] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:06.892 04:13:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.892 04:13:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:06.892 04:13:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:06.892 04:13:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:06.892 04:13:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:06.892 04:13:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:06.892 04:13:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:06.892 04:13:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:06.892 04:13:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.892 04:13:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.892 04:13:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.892 04:13:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:06.892 04:13:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.892 04:13:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.892 [ 00:15:06.892 { 00:15:06.892 "name": "BaseBdev2", 00:15:06.892 "aliases": [ 00:15:06.892 
"f0ce97ee-7f36-4827-976e-67b4ba354135" 00:15:06.892 ], 00:15:06.892 "product_name": "Malloc disk", 00:15:06.892 "block_size": 512, 00:15:06.892 "num_blocks": 65536, 00:15:06.892 "uuid": "f0ce97ee-7f36-4827-976e-67b4ba354135", 00:15:06.892 "assigned_rate_limits": { 00:15:06.892 "rw_ios_per_sec": 0, 00:15:06.892 "rw_mbytes_per_sec": 0, 00:15:06.892 "r_mbytes_per_sec": 0, 00:15:06.892 "w_mbytes_per_sec": 0 00:15:06.892 }, 00:15:06.892 "claimed": true, 00:15:06.892 "claim_type": "exclusive_write", 00:15:06.892 "zoned": false, 00:15:06.892 "supported_io_types": { 00:15:06.892 "read": true, 00:15:06.892 "write": true, 00:15:06.892 "unmap": true, 00:15:06.892 "flush": true, 00:15:06.892 "reset": true, 00:15:06.892 "nvme_admin": false, 00:15:06.892 "nvme_io": false, 00:15:06.892 "nvme_io_md": false, 00:15:06.892 "write_zeroes": true, 00:15:06.892 "zcopy": true, 00:15:06.892 "get_zone_info": false, 00:15:06.892 "zone_management": false, 00:15:06.892 "zone_append": false, 00:15:06.892 "compare": false, 00:15:06.892 "compare_and_write": false, 00:15:06.892 "abort": true, 00:15:06.892 "seek_hole": false, 00:15:06.892 "seek_data": false, 00:15:06.892 "copy": true, 00:15:06.892 "nvme_iov_md": false 00:15:06.892 }, 00:15:06.892 "memory_domains": [ 00:15:06.892 { 00:15:06.892 "dma_device_id": "system", 00:15:06.892 "dma_device_type": 1 00:15:06.892 }, 00:15:06.892 { 00:15:06.892 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:06.892 "dma_device_type": 2 00:15:06.892 } 00:15:06.892 ], 00:15:06.892 "driver_specific": {} 00:15:06.892 } 00:15:06.892 ] 00:15:06.892 04:13:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.892 04:13:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:06.892 04:13:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:06.892 04:13:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:15:06.892 04:13:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:06.892 04:13:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:06.892 04:13:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:06.892 04:13:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:06.892 04:13:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:06.892 04:13:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:06.892 04:13:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:06.892 04:13:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:06.892 04:13:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:06.892 04:13:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:06.892 04:13:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.892 04:13:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:06.892 04:13:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.892 04:13:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.892 04:13:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.892 04:13:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:06.892 "name": "Existed_Raid", 00:15:06.892 "uuid": 
"db646529-3816-4815-9a4d-ce74c5330bea", 00:15:06.892 "strip_size_kb": 64, 00:15:06.892 "state": "configuring", 00:15:06.892 "raid_level": "raid5f", 00:15:06.892 "superblock": true, 00:15:06.892 "num_base_bdevs": 4, 00:15:06.892 "num_base_bdevs_discovered": 2, 00:15:06.892 "num_base_bdevs_operational": 4, 00:15:06.892 "base_bdevs_list": [ 00:15:06.892 { 00:15:06.892 "name": "BaseBdev1", 00:15:06.892 "uuid": "cb53e2d5-8c90-487e-b950-17053a78fff6", 00:15:06.892 "is_configured": true, 00:15:06.892 "data_offset": 2048, 00:15:06.892 "data_size": 63488 00:15:06.892 }, 00:15:06.892 { 00:15:06.892 "name": "BaseBdev2", 00:15:06.892 "uuid": "f0ce97ee-7f36-4827-976e-67b4ba354135", 00:15:06.892 "is_configured": true, 00:15:06.892 "data_offset": 2048, 00:15:06.892 "data_size": 63488 00:15:06.892 }, 00:15:06.892 { 00:15:06.892 "name": "BaseBdev3", 00:15:06.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.892 "is_configured": false, 00:15:06.892 "data_offset": 0, 00:15:06.892 "data_size": 0 00:15:06.892 }, 00:15:06.892 { 00:15:06.892 "name": "BaseBdev4", 00:15:06.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.892 "is_configured": false, 00:15:06.892 "data_offset": 0, 00:15:06.892 "data_size": 0 00:15:06.892 } 00:15:06.892 ] 00:15:06.892 }' 00:15:06.892 04:13:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:06.892 04:13:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.463 04:13:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:07.463 04:13:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.463 04:13:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.463 [2024-11-21 04:13:07.280521] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:07.463 BaseBdev3 
00:15:07.463 04:13:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.463 04:13:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:07.463 04:13:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:07.463 04:13:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:07.463 04:13:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:07.463 04:13:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:07.463 04:13:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:07.463 04:13:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:07.463 04:13:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.463 04:13:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.463 04:13:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.463 04:13:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:07.463 04:13:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.463 04:13:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.463 [ 00:15:07.463 { 00:15:07.463 "name": "BaseBdev3", 00:15:07.463 "aliases": [ 00:15:07.463 "3060c57d-9af8-41e2-8e14-6c452f5650fc" 00:15:07.463 ], 00:15:07.463 "product_name": "Malloc disk", 00:15:07.463 "block_size": 512, 00:15:07.463 "num_blocks": 65536, 00:15:07.463 "uuid": "3060c57d-9af8-41e2-8e14-6c452f5650fc", 00:15:07.463 
"assigned_rate_limits": { 00:15:07.463 "rw_ios_per_sec": 0, 00:15:07.463 "rw_mbytes_per_sec": 0, 00:15:07.463 "r_mbytes_per_sec": 0, 00:15:07.463 "w_mbytes_per_sec": 0 00:15:07.463 }, 00:15:07.463 "claimed": true, 00:15:07.463 "claim_type": "exclusive_write", 00:15:07.463 "zoned": false, 00:15:07.463 "supported_io_types": { 00:15:07.463 "read": true, 00:15:07.463 "write": true, 00:15:07.463 "unmap": true, 00:15:07.463 "flush": true, 00:15:07.463 "reset": true, 00:15:07.463 "nvme_admin": false, 00:15:07.463 "nvme_io": false, 00:15:07.463 "nvme_io_md": false, 00:15:07.463 "write_zeroes": true, 00:15:07.463 "zcopy": true, 00:15:07.463 "get_zone_info": false, 00:15:07.463 "zone_management": false, 00:15:07.463 "zone_append": false, 00:15:07.463 "compare": false, 00:15:07.463 "compare_and_write": false, 00:15:07.463 "abort": true, 00:15:07.463 "seek_hole": false, 00:15:07.463 "seek_data": false, 00:15:07.463 "copy": true, 00:15:07.463 "nvme_iov_md": false 00:15:07.463 }, 00:15:07.463 "memory_domains": [ 00:15:07.463 { 00:15:07.463 "dma_device_id": "system", 00:15:07.463 "dma_device_type": 1 00:15:07.463 }, 00:15:07.463 { 00:15:07.463 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:07.463 "dma_device_type": 2 00:15:07.463 } 00:15:07.463 ], 00:15:07.463 "driver_specific": {} 00:15:07.463 } 00:15:07.463 ] 00:15:07.463 04:13:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.463 04:13:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:07.463 04:13:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:07.463 04:13:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:07.463 04:13:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:07.463 04:13:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:15:07.463 04:13:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:07.463 04:13:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:07.463 04:13:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:07.463 04:13:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:07.463 04:13:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:07.463 04:13:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:07.463 04:13:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:07.463 04:13:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:07.463 04:13:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.463 04:13:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:07.463 04:13:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.463 04:13:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.463 04:13:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.463 04:13:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:07.463 "name": "Existed_Raid", 00:15:07.463 "uuid": "db646529-3816-4815-9a4d-ce74c5330bea", 00:15:07.463 "strip_size_kb": 64, 00:15:07.463 "state": "configuring", 00:15:07.463 "raid_level": "raid5f", 00:15:07.463 "superblock": true, 00:15:07.463 "num_base_bdevs": 4, 00:15:07.463 "num_base_bdevs_discovered": 3, 
00:15:07.463 "num_base_bdevs_operational": 4, 00:15:07.463 "base_bdevs_list": [ 00:15:07.463 { 00:15:07.463 "name": "BaseBdev1", 00:15:07.464 "uuid": "cb53e2d5-8c90-487e-b950-17053a78fff6", 00:15:07.464 "is_configured": true, 00:15:07.464 "data_offset": 2048, 00:15:07.464 "data_size": 63488 00:15:07.464 }, 00:15:07.464 { 00:15:07.464 "name": "BaseBdev2", 00:15:07.464 "uuid": "f0ce97ee-7f36-4827-976e-67b4ba354135", 00:15:07.464 "is_configured": true, 00:15:07.464 "data_offset": 2048, 00:15:07.464 "data_size": 63488 00:15:07.464 }, 00:15:07.464 { 00:15:07.464 "name": "BaseBdev3", 00:15:07.464 "uuid": "3060c57d-9af8-41e2-8e14-6c452f5650fc", 00:15:07.464 "is_configured": true, 00:15:07.464 "data_offset": 2048, 00:15:07.464 "data_size": 63488 00:15:07.464 }, 00:15:07.464 { 00:15:07.464 "name": "BaseBdev4", 00:15:07.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.464 "is_configured": false, 00:15:07.464 "data_offset": 0, 00:15:07.464 "data_size": 0 00:15:07.464 } 00:15:07.464 ] 00:15:07.464 }' 00:15:07.464 04:13:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:07.464 04:13:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.033 04:13:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:08.033 04:13:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.033 04:13:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.033 [2024-11-21 04:13:07.768451] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:08.033 [2024-11-21 04:13:07.768775] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:15:08.033 [2024-11-21 04:13:07.768828] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:08.033 [2024-11-21 
04:13:07.769182] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:15:08.033 BaseBdev4 00:15:08.033 [2024-11-21 04:13:07.769791] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:15:08.033 [2024-11-21 04:13:07.769855] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:15:08.033 [2024-11-21 04:13:07.770038] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:08.033 04:13:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.033 04:13:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:15:08.033 04:13:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:08.033 04:13:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:08.033 04:13:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:08.033 04:13:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:08.033 04:13:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:08.033 04:13:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:08.033 04:13:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.033 04:13:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.033 04:13:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.033 04:13:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:08.033 04:13:07 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.033 04:13:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.033 [ 00:15:08.033 { 00:15:08.033 "name": "BaseBdev4", 00:15:08.033 "aliases": [ 00:15:08.033 "57eeaf27-d6db-417d-ba23-b0ea1c13cd34" 00:15:08.033 ], 00:15:08.033 "product_name": "Malloc disk", 00:15:08.033 "block_size": 512, 00:15:08.033 "num_blocks": 65536, 00:15:08.033 "uuid": "57eeaf27-d6db-417d-ba23-b0ea1c13cd34", 00:15:08.033 "assigned_rate_limits": { 00:15:08.033 "rw_ios_per_sec": 0, 00:15:08.033 "rw_mbytes_per_sec": 0, 00:15:08.033 "r_mbytes_per_sec": 0, 00:15:08.033 "w_mbytes_per_sec": 0 00:15:08.033 }, 00:15:08.033 "claimed": true, 00:15:08.033 "claim_type": "exclusive_write", 00:15:08.033 "zoned": false, 00:15:08.033 "supported_io_types": { 00:15:08.033 "read": true, 00:15:08.033 "write": true, 00:15:08.033 "unmap": true, 00:15:08.033 "flush": true, 00:15:08.033 "reset": true, 00:15:08.033 "nvme_admin": false, 00:15:08.033 "nvme_io": false, 00:15:08.033 "nvme_io_md": false, 00:15:08.033 "write_zeroes": true, 00:15:08.033 "zcopy": true, 00:15:08.033 "get_zone_info": false, 00:15:08.033 "zone_management": false, 00:15:08.033 "zone_append": false, 00:15:08.033 "compare": false, 00:15:08.033 "compare_and_write": false, 00:15:08.033 "abort": true, 00:15:08.033 "seek_hole": false, 00:15:08.033 "seek_data": false, 00:15:08.033 "copy": true, 00:15:08.033 "nvme_iov_md": false 00:15:08.033 }, 00:15:08.033 "memory_domains": [ 00:15:08.033 { 00:15:08.033 "dma_device_id": "system", 00:15:08.033 "dma_device_type": 1 00:15:08.033 }, 00:15:08.033 { 00:15:08.033 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:08.033 "dma_device_type": 2 00:15:08.033 } 00:15:08.033 ], 00:15:08.033 "driver_specific": {} 00:15:08.033 } 00:15:08.033 ] 00:15:08.033 04:13:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.033 04:13:07 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:08.033 04:13:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:08.033 04:13:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:08.033 04:13:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:15:08.033 04:13:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:08.033 04:13:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:08.033 04:13:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:08.033 04:13:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:08.033 04:13:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:08.033 04:13:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:08.033 04:13:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:08.033 04:13:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:08.034 04:13:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:08.034 04:13:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.034 04:13:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:08.034 04:13:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.034 04:13:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:08.034 04:13:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.034 04:13:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:08.034 "name": "Existed_Raid", 00:15:08.034 "uuid": "db646529-3816-4815-9a4d-ce74c5330bea", 00:15:08.034 "strip_size_kb": 64, 00:15:08.034 "state": "online", 00:15:08.034 "raid_level": "raid5f", 00:15:08.034 "superblock": true, 00:15:08.034 "num_base_bdevs": 4, 00:15:08.034 "num_base_bdevs_discovered": 4, 00:15:08.034 "num_base_bdevs_operational": 4, 00:15:08.034 "base_bdevs_list": [ 00:15:08.034 { 00:15:08.034 "name": "BaseBdev1", 00:15:08.034 "uuid": "cb53e2d5-8c90-487e-b950-17053a78fff6", 00:15:08.034 "is_configured": true, 00:15:08.034 "data_offset": 2048, 00:15:08.034 "data_size": 63488 00:15:08.034 }, 00:15:08.034 { 00:15:08.034 "name": "BaseBdev2", 00:15:08.034 "uuid": "f0ce97ee-7f36-4827-976e-67b4ba354135", 00:15:08.034 "is_configured": true, 00:15:08.034 "data_offset": 2048, 00:15:08.034 "data_size": 63488 00:15:08.034 }, 00:15:08.034 { 00:15:08.034 "name": "BaseBdev3", 00:15:08.034 "uuid": "3060c57d-9af8-41e2-8e14-6c452f5650fc", 00:15:08.034 "is_configured": true, 00:15:08.034 "data_offset": 2048, 00:15:08.034 "data_size": 63488 00:15:08.034 }, 00:15:08.034 { 00:15:08.034 "name": "BaseBdev4", 00:15:08.034 "uuid": "57eeaf27-d6db-417d-ba23-b0ea1c13cd34", 00:15:08.034 "is_configured": true, 00:15:08.034 "data_offset": 2048, 00:15:08.034 "data_size": 63488 00:15:08.034 } 00:15:08.034 ] 00:15:08.034 }' 00:15:08.034 04:13:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:08.034 04:13:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.301 04:13:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:08.301 04:13:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:15:08.301 04:13:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:08.301 04:13:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:08.301 04:13:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:08.301 04:13:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:08.301 04:13:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:08.301 04:13:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:08.301 04:13:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.301 04:13:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.301 [2024-11-21 04:13:08.240396] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:08.301 04:13:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.577 04:13:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:08.577 "name": "Existed_Raid", 00:15:08.577 "aliases": [ 00:15:08.577 "db646529-3816-4815-9a4d-ce74c5330bea" 00:15:08.577 ], 00:15:08.577 "product_name": "Raid Volume", 00:15:08.577 "block_size": 512, 00:15:08.577 "num_blocks": 190464, 00:15:08.577 "uuid": "db646529-3816-4815-9a4d-ce74c5330bea", 00:15:08.577 "assigned_rate_limits": { 00:15:08.577 "rw_ios_per_sec": 0, 00:15:08.577 "rw_mbytes_per_sec": 0, 00:15:08.577 "r_mbytes_per_sec": 0, 00:15:08.577 "w_mbytes_per_sec": 0 00:15:08.577 }, 00:15:08.577 "claimed": false, 00:15:08.577 "zoned": false, 00:15:08.577 "supported_io_types": { 00:15:08.577 "read": true, 00:15:08.577 "write": true, 00:15:08.577 "unmap": false, 00:15:08.577 "flush": false, 
00:15:08.577 "reset": true, 00:15:08.577 "nvme_admin": false, 00:15:08.577 "nvme_io": false, 00:15:08.577 "nvme_io_md": false, 00:15:08.577 "write_zeroes": true, 00:15:08.577 "zcopy": false, 00:15:08.577 "get_zone_info": false, 00:15:08.577 "zone_management": false, 00:15:08.577 "zone_append": false, 00:15:08.577 "compare": false, 00:15:08.577 "compare_and_write": false, 00:15:08.577 "abort": false, 00:15:08.577 "seek_hole": false, 00:15:08.577 "seek_data": false, 00:15:08.577 "copy": false, 00:15:08.577 "nvme_iov_md": false 00:15:08.577 }, 00:15:08.577 "driver_specific": { 00:15:08.577 "raid": { 00:15:08.577 "uuid": "db646529-3816-4815-9a4d-ce74c5330bea", 00:15:08.577 "strip_size_kb": 64, 00:15:08.577 "state": "online", 00:15:08.577 "raid_level": "raid5f", 00:15:08.577 "superblock": true, 00:15:08.577 "num_base_bdevs": 4, 00:15:08.577 "num_base_bdevs_discovered": 4, 00:15:08.577 "num_base_bdevs_operational": 4, 00:15:08.577 "base_bdevs_list": [ 00:15:08.577 { 00:15:08.577 "name": "BaseBdev1", 00:15:08.577 "uuid": "cb53e2d5-8c90-487e-b950-17053a78fff6", 00:15:08.577 "is_configured": true, 00:15:08.577 "data_offset": 2048, 00:15:08.577 "data_size": 63488 00:15:08.577 }, 00:15:08.577 { 00:15:08.577 "name": "BaseBdev2", 00:15:08.577 "uuid": "f0ce97ee-7f36-4827-976e-67b4ba354135", 00:15:08.577 "is_configured": true, 00:15:08.577 "data_offset": 2048, 00:15:08.577 "data_size": 63488 00:15:08.577 }, 00:15:08.577 { 00:15:08.577 "name": "BaseBdev3", 00:15:08.577 "uuid": "3060c57d-9af8-41e2-8e14-6c452f5650fc", 00:15:08.577 "is_configured": true, 00:15:08.577 "data_offset": 2048, 00:15:08.577 "data_size": 63488 00:15:08.577 }, 00:15:08.577 { 00:15:08.577 "name": "BaseBdev4", 00:15:08.577 "uuid": "57eeaf27-d6db-417d-ba23-b0ea1c13cd34", 00:15:08.577 "is_configured": true, 00:15:08.577 "data_offset": 2048, 00:15:08.577 "data_size": 63488 00:15:08.577 } 00:15:08.577 ] 00:15:08.577 } 00:15:08.577 } 00:15:08.577 }' 00:15:08.577 04:13:08 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:08.577 04:13:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:08.577 BaseBdev2 00:15:08.577 BaseBdev3 00:15:08.577 BaseBdev4' 00:15:08.577 04:13:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:08.577 04:13:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:08.577 04:13:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:08.577 04:13:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:08.577 04:13:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:08.577 04:13:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.577 04:13:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.577 04:13:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.577 04:13:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:08.577 04:13:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:08.577 04:13:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:08.577 04:13:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:08.577 04:13:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:08.577 04:13:08 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.577 04:13:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.577 04:13:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.577 04:13:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:08.577 04:13:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:08.577 04:13:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:08.577 04:13:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:08.577 04:13:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.577 04:13:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.578 04:13:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:08.578 04:13:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.578 04:13:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:08.578 04:13:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:08.578 04:13:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:08.578 04:13:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:08.578 04:13:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:08.578 04:13:08 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.578 04:13:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.578 04:13:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.853 04:13:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:08.853 04:13:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:08.853 04:13:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:08.853 04:13:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.853 04:13:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.853 [2024-11-21 04:13:08.563683] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:08.853 04:13:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.853 04:13:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:08.853 04:13:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:08.853 04:13:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:08.853 04:13:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:15:08.853 04:13:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:08.853 04:13:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:08.853 04:13:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:08.853 04:13:08 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:08.853 04:13:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:08.853 04:13:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:08.853 04:13:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:08.853 04:13:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:08.853 04:13:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:08.853 04:13:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:08.853 04:13:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:08.853 04:13:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.854 04:13:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.854 04:13:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.854 04:13:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:08.854 04:13:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.854 04:13:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:08.854 "name": "Existed_Raid", 00:15:08.854 "uuid": "db646529-3816-4815-9a4d-ce74c5330bea", 00:15:08.854 "strip_size_kb": 64, 00:15:08.854 "state": "online", 00:15:08.854 "raid_level": "raid5f", 00:15:08.854 "superblock": true, 00:15:08.854 "num_base_bdevs": 4, 00:15:08.854 "num_base_bdevs_discovered": 3, 00:15:08.854 "num_base_bdevs_operational": 3, 00:15:08.854 "base_bdevs_list": [ 00:15:08.854 { 00:15:08.854 "name": 
null, 00:15:08.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.854 "is_configured": false, 00:15:08.854 "data_offset": 0, 00:15:08.854 "data_size": 63488 00:15:08.854 }, 00:15:08.854 { 00:15:08.854 "name": "BaseBdev2", 00:15:08.854 "uuid": "f0ce97ee-7f36-4827-976e-67b4ba354135", 00:15:08.854 "is_configured": true, 00:15:08.854 "data_offset": 2048, 00:15:08.854 "data_size": 63488 00:15:08.854 }, 00:15:08.854 { 00:15:08.854 "name": "BaseBdev3", 00:15:08.854 "uuid": "3060c57d-9af8-41e2-8e14-6c452f5650fc", 00:15:08.854 "is_configured": true, 00:15:08.854 "data_offset": 2048, 00:15:08.854 "data_size": 63488 00:15:08.854 }, 00:15:08.854 { 00:15:08.854 "name": "BaseBdev4", 00:15:08.854 "uuid": "57eeaf27-d6db-417d-ba23-b0ea1c13cd34", 00:15:08.854 "is_configured": true, 00:15:08.854 "data_offset": 2048, 00:15:08.854 "data_size": 63488 00:15:08.854 } 00:15:08.854 ] 00:15:08.854 }' 00:15:08.854 04:13:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:08.854 04:13:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.114 04:13:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:09.114 04:13:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:09.114 04:13:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:09.114 04:13:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.114 04:13:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.114 04:13:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.114 04:13:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.374 04:13:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:15:09.374 04:13:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:09.374 04:13:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:09.374 04:13:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.374 04:13:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.374 [2024-11-21 04:13:09.095876] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:09.374 [2024-11-21 04:13:09.096100] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:09.374 [2024-11-21 04:13:09.116534] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:09.374 04:13:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.374 04:13:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:09.374 04:13:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:09.374 04:13:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.374 04:13:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:09.374 04:13:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.374 04:13:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.374 04:13:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.374 04:13:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:09.374 04:13:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:15:09.374 04:13:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:09.374 04:13:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.374 04:13:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.374 [2024-11-21 04:13:09.176461] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:09.374 04:13:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.374 04:13:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:09.374 04:13:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:09.374 04:13:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.374 04:13:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.374 04:13:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:09.374 04:13:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.374 04:13:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.374 04:13:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:09.374 04:13:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:09.374 04:13:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:15:09.374 04:13:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.374 04:13:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.374 [2024-11-21 
04:13:09.256976] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:09.374 [2024-11-21 04:13:09.257092] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:15:09.374 04:13:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.374 04:13:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:09.374 04:13:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:09.375 04:13:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.375 04:13:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.375 04:13:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.375 04:13:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:09.375 04:13:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.375 04:13:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:09.375 04:13:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:09.375 04:13:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:15:09.375 04:13:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:09.375 04:13:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:09.375 04:13:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:09.375 04:13:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.375 04:13:09 
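The trace above deletes the remaining base bdevs one by one, and after each deletion re-probes the array name with `jq -r '.[0]["name"]'` until `bdev_raid.sh@278` finally gets an empty result (`raid_bdev=`). The jq side of that probe can be sketched offline against a trimmed stand-in for the `bdev_raid_get_bdevs all` response (JSON values below are invented, not live RPC output):

```shell
# Trimmed stand-in for `rpc.py bdev_raid_get_bdevs all` output (values invented).
resp='[{"name": "Existed_Raid", "state": "online"}]'

# bdev_raid.sh@271/@278-style name probe: `select(.)` drops null, so the
# variable ends up empty once the raid bdev no longer exists.
raid_bdev=$(echo "$resp" | jq -r '.[0]["name"] | select(.)')
echo "raid_bdev=$raid_bdev"

# After the last base bdev is removed the RPC returns an empty list,
# and the same filter yields nothing:
raid_bdev=$(echo '[]' | jq -r '.[0]["name"] | select(.)')
echo "raid_bdev=$raid_bdev"
```

The `select(.)` guard is what lets the same filter serve both the "array still present" and "array gone" checks without jq erroring on the empty list.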
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.635 BaseBdev2 00:15:09.635 04:13:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.635 04:13:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:09.635 04:13:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:09.635 04:13:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:09.635 04:13:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:09.635 04:13:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:09.635 04:13:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:09.635 04:13:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:09.635 04:13:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.635 04:13:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.635 04:13:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.635 04:13:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:09.635 04:13:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.635 04:13:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.635 [ 00:15:09.635 { 00:15:09.635 "name": "BaseBdev2", 00:15:09.635 "aliases": [ 00:15:09.635 "beef1c32-7750-47c1-bfe9-84f2bfdacd15" 00:15:09.635 ], 00:15:09.635 "product_name": "Malloc disk", 00:15:09.635 "block_size": 512, 00:15:09.635 
"num_blocks": 65536, 00:15:09.635 "uuid": "beef1c32-7750-47c1-bfe9-84f2bfdacd15", 00:15:09.635 "assigned_rate_limits": { 00:15:09.635 "rw_ios_per_sec": 0, 00:15:09.635 "rw_mbytes_per_sec": 0, 00:15:09.635 "r_mbytes_per_sec": 0, 00:15:09.635 "w_mbytes_per_sec": 0 00:15:09.635 }, 00:15:09.635 "claimed": false, 00:15:09.635 "zoned": false, 00:15:09.635 "supported_io_types": { 00:15:09.635 "read": true, 00:15:09.635 "write": true, 00:15:09.635 "unmap": true, 00:15:09.635 "flush": true, 00:15:09.635 "reset": true, 00:15:09.635 "nvme_admin": false, 00:15:09.635 "nvme_io": false, 00:15:09.635 "nvme_io_md": false, 00:15:09.635 "write_zeroes": true, 00:15:09.635 "zcopy": true, 00:15:09.635 "get_zone_info": false, 00:15:09.635 "zone_management": false, 00:15:09.636 "zone_append": false, 00:15:09.636 "compare": false, 00:15:09.636 "compare_and_write": false, 00:15:09.636 "abort": true, 00:15:09.636 "seek_hole": false, 00:15:09.636 "seek_data": false, 00:15:09.636 "copy": true, 00:15:09.636 "nvme_iov_md": false 00:15:09.636 }, 00:15:09.636 "memory_domains": [ 00:15:09.636 { 00:15:09.636 "dma_device_id": "system", 00:15:09.636 "dma_device_type": 1 00:15:09.636 }, 00:15:09.636 { 00:15:09.636 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:09.636 "dma_device_type": 2 00:15:09.636 } 00:15:09.636 ], 00:15:09.636 "driver_specific": {} 00:15:09.636 } 00:15:09.636 ] 00:15:09.636 04:13:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.636 04:13:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:09.636 04:13:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:09.636 04:13:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:09.636 04:13:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:09.636 04:13:09 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.636 04:13:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.636 BaseBdev3 00:15:09.636 04:13:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.636 04:13:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:09.636 04:13:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:09.636 04:13:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:09.636 04:13:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:09.636 04:13:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:09.636 04:13:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:09.636 04:13:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:09.636 04:13:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.636 04:13:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.636 04:13:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.636 04:13:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:09.636 04:13:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.636 04:13:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.636 [ 00:15:09.636 { 00:15:09.636 "name": "BaseBdev3", 00:15:09.636 "aliases": [ 00:15:09.636 
"b992a089-6c70-4e25-aad1-e777cb70f9a2" 00:15:09.636 ], 00:15:09.636 "product_name": "Malloc disk", 00:15:09.636 "block_size": 512, 00:15:09.636 "num_blocks": 65536, 00:15:09.636 "uuid": "b992a089-6c70-4e25-aad1-e777cb70f9a2", 00:15:09.636 "assigned_rate_limits": { 00:15:09.636 "rw_ios_per_sec": 0, 00:15:09.636 "rw_mbytes_per_sec": 0, 00:15:09.636 "r_mbytes_per_sec": 0, 00:15:09.636 "w_mbytes_per_sec": 0 00:15:09.636 }, 00:15:09.636 "claimed": false, 00:15:09.636 "zoned": false, 00:15:09.636 "supported_io_types": { 00:15:09.636 "read": true, 00:15:09.636 "write": true, 00:15:09.636 "unmap": true, 00:15:09.636 "flush": true, 00:15:09.636 "reset": true, 00:15:09.636 "nvme_admin": false, 00:15:09.636 "nvme_io": false, 00:15:09.636 "nvme_io_md": false, 00:15:09.636 "write_zeroes": true, 00:15:09.636 "zcopy": true, 00:15:09.636 "get_zone_info": false, 00:15:09.636 "zone_management": false, 00:15:09.636 "zone_append": false, 00:15:09.636 "compare": false, 00:15:09.636 "compare_and_write": false, 00:15:09.636 "abort": true, 00:15:09.636 "seek_hole": false, 00:15:09.636 "seek_data": false, 00:15:09.636 "copy": true, 00:15:09.636 "nvme_iov_md": false 00:15:09.636 }, 00:15:09.636 "memory_domains": [ 00:15:09.636 { 00:15:09.636 "dma_device_id": "system", 00:15:09.636 "dma_device_type": 1 00:15:09.636 }, 00:15:09.636 { 00:15:09.636 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:09.636 "dma_device_type": 2 00:15:09.636 } 00:15:09.636 ], 00:15:09.636 "driver_specific": {} 00:15:09.636 } 00:15:09.636 ] 00:15:09.636 04:13:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.636 04:13:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:09.636 04:13:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:09.636 04:13:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:09.636 04:13:09 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:09.636 04:13:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.636 04:13:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.636 BaseBdev4 00:15:09.636 04:13:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.636 04:13:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:15:09.636 04:13:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:09.636 04:13:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:09.636 04:13:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:09.636 04:13:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:09.636 04:13:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:09.636 04:13:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:09.636 04:13:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.636 04:13:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.636 04:13:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.636 04:13:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:09.636 04:13:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.636 04:13:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:15:09.636 [ 00:15:09.636 { 00:15:09.636 "name": "BaseBdev4", 00:15:09.636 "aliases": [ 00:15:09.636 "28f05c4c-b68e-4390-b54c-2018e84f2e0c" 00:15:09.636 ], 00:15:09.636 "product_name": "Malloc disk", 00:15:09.636 "block_size": 512, 00:15:09.636 "num_blocks": 65536, 00:15:09.636 "uuid": "28f05c4c-b68e-4390-b54c-2018e84f2e0c", 00:15:09.636 "assigned_rate_limits": { 00:15:09.636 "rw_ios_per_sec": 0, 00:15:09.636 "rw_mbytes_per_sec": 0, 00:15:09.636 "r_mbytes_per_sec": 0, 00:15:09.636 "w_mbytes_per_sec": 0 00:15:09.636 }, 00:15:09.636 "claimed": false, 00:15:09.636 "zoned": false, 00:15:09.636 "supported_io_types": { 00:15:09.636 "read": true, 00:15:09.636 "write": true, 00:15:09.636 "unmap": true, 00:15:09.636 "flush": true, 00:15:09.636 "reset": true, 00:15:09.636 "nvme_admin": false, 00:15:09.636 "nvme_io": false, 00:15:09.636 "nvme_io_md": false, 00:15:09.636 "write_zeroes": true, 00:15:09.636 "zcopy": true, 00:15:09.636 "get_zone_info": false, 00:15:09.636 "zone_management": false, 00:15:09.636 "zone_append": false, 00:15:09.636 "compare": false, 00:15:09.636 "compare_and_write": false, 00:15:09.636 "abort": true, 00:15:09.636 "seek_hole": false, 00:15:09.636 "seek_data": false, 00:15:09.636 "copy": true, 00:15:09.636 "nvme_iov_md": false 00:15:09.636 }, 00:15:09.636 "memory_domains": [ 00:15:09.636 { 00:15:09.636 "dma_device_id": "system", 00:15:09.636 "dma_device_type": 1 00:15:09.636 }, 00:15:09.636 { 00:15:09.636 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:09.636 "dma_device_type": 2 00:15:09.636 } 00:15:09.636 ], 00:15:09.636 "driver_specific": {} 00:15:09.636 } 00:15:09.636 ] 00:15:09.636 04:13:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.636 04:13:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:09.636 04:13:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:09.636 04:13:09 
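After tearing the array down, the loop at `bdev_raid.sh@286` recreates BaseBdev2 through BaseBdev4 with the same malloc-create/waitforbdev pair each time. A dry-run sketch of that loop, which only collects the RPCs it would issue rather than sending them to a live SPDK target (the `scripts/rpc.py` path is an assumption, adjust for your checkout):

```shell
num_base_bdevs=4
rpc="scripts/rpc.py"   # assumed path to SPDK's RPC client

cmds=()
for ((i = 1; i < num_base_bdevs; i++)); do
    bdev="BaseBdev$((i + 1))"
    # bdev_raid.sh@287: 32 MiB malloc bdev with 512-byte blocks (=> 65536 blocks)
    cmds+=("$rpc bdev_malloc_create 32 512 -b $bdev")
    # waitforbdev: flush examine callbacks, then poll the bdev with a 2000 ms timeout
    cmds+=("$rpc bdev_wait_for_examine")
    cmds+=("$rpc bdev_get_bdevs -b $bdev -t 2000")
done
printf '%s\n' "${cmds[@]}"
```

Three RPCs per base bdev, three base bdevs: nine commands in total, matching the repeated `bdev_malloc_create` / `bdev_wait_for_examine` / `bdev_get_bdevs` triples visible in the trace.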
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:09.636 04:13:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:09.636 04:13:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.636 04:13:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.636 [2024-11-21 04:13:09.504749] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:09.636 [2024-11-21 04:13:09.504848] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:09.636 [2024-11-21 04:13:09.504894] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:09.636 [2024-11-21 04:13:09.507043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:09.636 [2024-11-21 04:13:09.507151] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:09.636 04:13:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.636 04:13:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:09.636 04:13:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:09.636 04:13:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:09.636 04:13:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:09.637 04:13:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:09.637 04:13:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:15:09.637 04:13:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:09.637 04:13:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:09.637 04:13:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:09.637 04:13:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:09.637 04:13:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.637 04:13:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:09.637 04:13:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.637 04:13:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.637 04:13:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.637 04:13:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:09.637 "name": "Existed_Raid", 00:15:09.637 "uuid": "f3740b23-5e44-4036-8b0f-4791a5833b17", 00:15:09.637 "strip_size_kb": 64, 00:15:09.637 "state": "configuring", 00:15:09.637 "raid_level": "raid5f", 00:15:09.637 "superblock": true, 00:15:09.637 "num_base_bdevs": 4, 00:15:09.637 "num_base_bdevs_discovered": 3, 00:15:09.637 "num_base_bdevs_operational": 4, 00:15:09.637 "base_bdevs_list": [ 00:15:09.637 { 00:15:09.637 "name": "BaseBdev1", 00:15:09.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.637 "is_configured": false, 00:15:09.637 "data_offset": 0, 00:15:09.637 "data_size": 0 00:15:09.637 }, 00:15:09.637 { 00:15:09.637 "name": "BaseBdev2", 00:15:09.637 "uuid": "beef1c32-7750-47c1-bfe9-84f2bfdacd15", 00:15:09.637 "is_configured": true, 00:15:09.637 "data_offset": 2048, 00:15:09.637 
"data_size": 63488 00:15:09.637 }, 00:15:09.637 { 00:15:09.637 "name": "BaseBdev3", 00:15:09.637 "uuid": "b992a089-6c70-4e25-aad1-e777cb70f9a2", 00:15:09.637 "is_configured": true, 00:15:09.637 "data_offset": 2048, 00:15:09.637 "data_size": 63488 00:15:09.637 }, 00:15:09.637 { 00:15:09.637 "name": "BaseBdev4", 00:15:09.637 "uuid": "28f05c4c-b68e-4390-b54c-2018e84f2e0c", 00:15:09.637 "is_configured": true, 00:15:09.637 "data_offset": 2048, 00:15:09.637 "data_size": 63488 00:15:09.637 } 00:15:09.637 ] 00:15:09.637 }' 00:15:09.637 04:13:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:09.637 04:13:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.207 04:13:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:10.207 04:13:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.207 04:13:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.207 [2024-11-21 04:13:09.964056] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:10.207 04:13:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.207 04:13:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:10.207 04:13:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:10.207 04:13:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:10.207 04:13:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:10.207 04:13:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:10.207 04:13:09 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:10.207 04:13:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:10.207 04:13:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:10.207 04:13:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:10.207 04:13:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:10.207 04:13:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.207 04:13:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.207 04:13:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.207 04:13:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:10.207 04:13:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.207 04:13:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:10.207 "name": "Existed_Raid", 00:15:10.207 "uuid": "f3740b23-5e44-4036-8b0f-4791a5833b17", 00:15:10.207 "strip_size_kb": 64, 00:15:10.207 "state": "configuring", 00:15:10.207 "raid_level": "raid5f", 00:15:10.207 "superblock": true, 00:15:10.207 "num_base_bdevs": 4, 00:15:10.207 "num_base_bdevs_discovered": 2, 00:15:10.207 "num_base_bdevs_operational": 4, 00:15:10.207 "base_bdevs_list": [ 00:15:10.207 { 00:15:10.207 "name": "BaseBdev1", 00:15:10.207 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.207 "is_configured": false, 00:15:10.207 "data_offset": 0, 00:15:10.207 "data_size": 0 00:15:10.207 }, 00:15:10.207 { 00:15:10.207 "name": null, 00:15:10.207 "uuid": "beef1c32-7750-47c1-bfe9-84f2bfdacd15", 00:15:10.207 
"is_configured": false, 00:15:10.207 "data_offset": 0, 00:15:10.207 "data_size": 63488 00:15:10.207 }, 00:15:10.207 { 00:15:10.207 "name": "BaseBdev3", 00:15:10.207 "uuid": "b992a089-6c70-4e25-aad1-e777cb70f9a2", 00:15:10.207 "is_configured": true, 00:15:10.207 "data_offset": 2048, 00:15:10.207 "data_size": 63488 00:15:10.207 }, 00:15:10.207 { 00:15:10.207 "name": "BaseBdev4", 00:15:10.207 "uuid": "28f05c4c-b68e-4390-b54c-2018e84f2e0c", 00:15:10.207 "is_configured": true, 00:15:10.208 "data_offset": 2048, 00:15:10.208 "data_size": 63488 00:15:10.208 } 00:15:10.208 ] 00:15:10.208 }' 00:15:10.208 04:13:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:10.208 04:13:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.468 04:13:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.468 04:13:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.468 04:13:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:10.468 04:13:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.468 04:13:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.468 04:13:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:10.468 04:13:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:10.468 04:13:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.468 04:13:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.729 [2024-11-21 04:13:10.452023] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:15:10.729 BaseBdev1 00:15:10.729 04:13:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.729 04:13:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:10.729 04:13:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:10.729 04:13:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:10.729 04:13:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:10.729 04:13:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:10.729 04:13:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:10.729 04:13:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:10.729 04:13:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.729 04:13:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.729 04:13:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.729 04:13:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:10.729 04:13:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.729 04:13:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.729 [ 00:15:10.729 { 00:15:10.729 "name": "BaseBdev1", 00:15:10.729 "aliases": [ 00:15:10.729 "874ada8c-de48-4aad-98ec-041f72a49143" 00:15:10.729 ], 00:15:10.729 "product_name": "Malloc disk", 00:15:10.729 "block_size": 512, 00:15:10.729 "num_blocks": 65536, 00:15:10.729 "uuid": "874ada8c-de48-4aad-98ec-041f72a49143", 
00:15:10.729 "assigned_rate_limits": { 00:15:10.729 "rw_ios_per_sec": 0, 00:15:10.729 "rw_mbytes_per_sec": 0, 00:15:10.729 "r_mbytes_per_sec": 0, 00:15:10.729 "w_mbytes_per_sec": 0 00:15:10.729 }, 00:15:10.729 "claimed": true, 00:15:10.729 "claim_type": "exclusive_write", 00:15:10.729 "zoned": false, 00:15:10.729 "supported_io_types": { 00:15:10.729 "read": true, 00:15:10.729 "write": true, 00:15:10.729 "unmap": true, 00:15:10.729 "flush": true, 00:15:10.729 "reset": true, 00:15:10.729 "nvme_admin": false, 00:15:10.729 "nvme_io": false, 00:15:10.729 "nvme_io_md": false, 00:15:10.729 "write_zeroes": true, 00:15:10.729 "zcopy": true, 00:15:10.729 "get_zone_info": false, 00:15:10.729 "zone_management": false, 00:15:10.729 "zone_append": false, 00:15:10.729 "compare": false, 00:15:10.729 "compare_and_write": false, 00:15:10.729 "abort": true, 00:15:10.729 "seek_hole": false, 00:15:10.729 "seek_data": false, 00:15:10.729 "copy": true, 00:15:10.729 "nvme_iov_md": false 00:15:10.729 }, 00:15:10.729 "memory_domains": [ 00:15:10.729 { 00:15:10.729 "dma_device_id": "system", 00:15:10.729 "dma_device_type": 1 00:15:10.729 }, 00:15:10.729 { 00:15:10.729 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:10.729 "dma_device_type": 2 00:15:10.729 } 00:15:10.729 ], 00:15:10.729 "driver_specific": {} 00:15:10.729 } 00:15:10.729 ] 00:15:10.729 04:13:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.729 04:13:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:10.729 04:13:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:10.729 04:13:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:10.729 04:13:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:10.729 04:13:10 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:10.729 04:13:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:10.729 04:13:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:10.729 04:13:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:10.729 04:13:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:10.729 04:13:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:10.729 04:13:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:10.729 04:13:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.729 04:13:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.729 04:13:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:10.729 04:13:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.729 04:13:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.729 04:13:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:10.729 "name": "Existed_Raid", 00:15:10.729 "uuid": "f3740b23-5e44-4036-8b0f-4791a5833b17", 00:15:10.729 "strip_size_kb": 64, 00:15:10.729 "state": "configuring", 00:15:10.729 "raid_level": "raid5f", 00:15:10.729 "superblock": true, 00:15:10.729 "num_base_bdevs": 4, 00:15:10.729 "num_base_bdevs_discovered": 3, 00:15:10.729 "num_base_bdevs_operational": 4, 00:15:10.729 "base_bdevs_list": [ 00:15:10.729 { 00:15:10.729 "name": "BaseBdev1", 00:15:10.729 "uuid": "874ada8c-de48-4aad-98ec-041f72a49143", 
00:15:10.729 "is_configured": true, 00:15:10.729 "data_offset": 2048, 00:15:10.729 "data_size": 63488 00:15:10.729 }, 00:15:10.729 { 00:15:10.729 "name": null, 00:15:10.729 "uuid": "beef1c32-7750-47c1-bfe9-84f2bfdacd15", 00:15:10.729 "is_configured": false, 00:15:10.729 "data_offset": 0, 00:15:10.729 "data_size": 63488 00:15:10.729 }, 00:15:10.729 { 00:15:10.729 "name": "BaseBdev3", 00:15:10.729 "uuid": "b992a089-6c70-4e25-aad1-e777cb70f9a2", 00:15:10.729 "is_configured": true, 00:15:10.729 "data_offset": 2048, 00:15:10.729 "data_size": 63488 00:15:10.729 }, 00:15:10.729 { 00:15:10.729 "name": "BaseBdev4", 00:15:10.729 "uuid": "28f05c4c-b68e-4390-b54c-2018e84f2e0c", 00:15:10.729 "is_configured": true, 00:15:10.729 "data_offset": 2048, 00:15:10.729 "data_size": 63488 00:15:10.729 } 00:15:10.729 ] 00:15:10.729 }' 00:15:10.729 04:13:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:10.729 04:13:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.300 04:13:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:11.300 04:13:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.300 04:13:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.300 04:13:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.300 04:13:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.300 04:13:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:11.300 04:13:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:11.300 04:13:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:11.300 04:13:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.300 [2024-11-21 04:13:11.027280] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:11.300 04:13:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.300 04:13:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:11.300 04:13:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:11.300 04:13:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:11.300 04:13:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:11.300 04:13:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:11.300 04:13:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:11.300 04:13:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:11.300 04:13:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:11.300 04:13:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:11.300 04:13:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:11.300 04:13:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.300 04:13:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:11.300 04:13:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.300 04:13:11 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:11.300 04:13:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.300 04:13:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:11.300 "name": "Existed_Raid", 00:15:11.300 "uuid": "f3740b23-5e44-4036-8b0f-4791a5833b17", 00:15:11.300 "strip_size_kb": 64, 00:15:11.300 "state": "configuring", 00:15:11.300 "raid_level": "raid5f", 00:15:11.300 "superblock": true, 00:15:11.300 "num_base_bdevs": 4, 00:15:11.300 "num_base_bdevs_discovered": 2, 00:15:11.300 "num_base_bdevs_operational": 4, 00:15:11.300 "base_bdevs_list": [ 00:15:11.300 { 00:15:11.300 "name": "BaseBdev1", 00:15:11.300 "uuid": "874ada8c-de48-4aad-98ec-041f72a49143", 00:15:11.300 "is_configured": true, 00:15:11.300 "data_offset": 2048, 00:15:11.300 "data_size": 63488 00:15:11.300 }, 00:15:11.300 { 00:15:11.300 "name": null, 00:15:11.300 "uuid": "beef1c32-7750-47c1-bfe9-84f2bfdacd15", 00:15:11.300 "is_configured": false, 00:15:11.300 "data_offset": 0, 00:15:11.300 "data_size": 63488 00:15:11.300 }, 00:15:11.300 { 00:15:11.300 "name": null, 00:15:11.300 "uuid": "b992a089-6c70-4e25-aad1-e777cb70f9a2", 00:15:11.300 "is_configured": false, 00:15:11.300 "data_offset": 0, 00:15:11.300 "data_size": 63488 00:15:11.300 }, 00:15:11.300 { 00:15:11.300 "name": "BaseBdev4", 00:15:11.300 "uuid": "28f05c4c-b68e-4390-b54c-2018e84f2e0c", 00:15:11.300 "is_configured": true, 00:15:11.300 "data_offset": 2048, 00:15:11.300 "data_size": 63488 00:15:11.300 } 00:15:11.300 ] 00:15:11.300 }' 00:15:11.300 04:13:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:11.300 04:13:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.560 04:13:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.560 04:13:11 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:11.560 04:13:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.560 04:13:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.560 04:13:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.560 04:13:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:11.561 04:13:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:11.561 04:13:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.561 04:13:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.561 [2024-11-21 04:13:11.434564] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:11.561 04:13:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.561 04:13:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:11.561 04:13:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:11.561 04:13:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:11.561 04:13:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:11.561 04:13:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:11.561 04:13:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:11.561 04:13:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:15:11.561 04:13:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:11.561 04:13:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:11.561 04:13:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:11.561 04:13:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.561 04:13:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:11.561 04:13:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.561 04:13:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.561 04:13:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.561 04:13:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:11.561 "name": "Existed_Raid", 00:15:11.561 "uuid": "f3740b23-5e44-4036-8b0f-4791a5833b17", 00:15:11.561 "strip_size_kb": 64, 00:15:11.561 "state": "configuring", 00:15:11.561 "raid_level": "raid5f", 00:15:11.561 "superblock": true, 00:15:11.561 "num_base_bdevs": 4, 00:15:11.561 "num_base_bdevs_discovered": 3, 00:15:11.561 "num_base_bdevs_operational": 4, 00:15:11.561 "base_bdevs_list": [ 00:15:11.561 { 00:15:11.561 "name": "BaseBdev1", 00:15:11.561 "uuid": "874ada8c-de48-4aad-98ec-041f72a49143", 00:15:11.561 "is_configured": true, 00:15:11.561 "data_offset": 2048, 00:15:11.561 "data_size": 63488 00:15:11.561 }, 00:15:11.561 { 00:15:11.561 "name": null, 00:15:11.561 "uuid": "beef1c32-7750-47c1-bfe9-84f2bfdacd15", 00:15:11.561 "is_configured": false, 00:15:11.561 "data_offset": 0, 00:15:11.561 "data_size": 63488 00:15:11.561 }, 00:15:11.561 { 00:15:11.561 "name": "BaseBdev3", 00:15:11.561 "uuid": "b992a089-6c70-4e25-aad1-e777cb70f9a2", 
00:15:11.561 "is_configured": true, 00:15:11.561 "data_offset": 2048, 00:15:11.561 "data_size": 63488 00:15:11.561 }, 00:15:11.561 { 00:15:11.561 "name": "BaseBdev4", 00:15:11.561 "uuid": "28f05c4c-b68e-4390-b54c-2018e84f2e0c", 00:15:11.561 "is_configured": true, 00:15:11.561 "data_offset": 2048, 00:15:11.561 "data_size": 63488 00:15:11.561 } 00:15:11.561 ] 00:15:11.561 }' 00:15:11.561 04:13:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:11.561 04:13:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.132 04:13:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.132 04:13:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.132 04:13:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:12.132 04:13:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.132 04:13:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.132 04:13:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:12.132 04:13:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:12.132 04:13:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.132 04:13:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.132 [2024-11-21 04:13:11.857887] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:12.132 04:13:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.132 04:13:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:15:12.132 04:13:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:12.132 04:13:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:12.132 04:13:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:12.132 04:13:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:12.132 04:13:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:12.132 04:13:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:12.132 04:13:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:12.132 04:13:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:12.132 04:13:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:12.132 04:13:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.132 04:13:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:12.132 04:13:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.132 04:13:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.132 04:13:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.132 04:13:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:12.132 "name": "Existed_Raid", 00:15:12.132 "uuid": "f3740b23-5e44-4036-8b0f-4791a5833b17", 00:15:12.132 "strip_size_kb": 64, 00:15:12.132 "state": "configuring", 00:15:12.132 "raid_level": "raid5f", 
00:15:12.132 "superblock": true, 00:15:12.132 "num_base_bdevs": 4, 00:15:12.132 "num_base_bdevs_discovered": 2, 00:15:12.132 "num_base_bdevs_operational": 4, 00:15:12.132 "base_bdevs_list": [ 00:15:12.132 { 00:15:12.132 "name": null, 00:15:12.132 "uuid": "874ada8c-de48-4aad-98ec-041f72a49143", 00:15:12.132 "is_configured": false, 00:15:12.132 "data_offset": 0, 00:15:12.132 "data_size": 63488 00:15:12.132 }, 00:15:12.132 { 00:15:12.132 "name": null, 00:15:12.132 "uuid": "beef1c32-7750-47c1-bfe9-84f2bfdacd15", 00:15:12.132 "is_configured": false, 00:15:12.132 "data_offset": 0, 00:15:12.132 "data_size": 63488 00:15:12.132 }, 00:15:12.132 { 00:15:12.132 "name": "BaseBdev3", 00:15:12.132 "uuid": "b992a089-6c70-4e25-aad1-e777cb70f9a2", 00:15:12.132 "is_configured": true, 00:15:12.132 "data_offset": 2048, 00:15:12.132 "data_size": 63488 00:15:12.132 }, 00:15:12.132 { 00:15:12.132 "name": "BaseBdev4", 00:15:12.132 "uuid": "28f05c4c-b68e-4390-b54c-2018e84f2e0c", 00:15:12.132 "is_configured": true, 00:15:12.132 "data_offset": 2048, 00:15:12.132 "data_size": 63488 00:15:12.132 } 00:15:12.132 ] 00:15:12.132 }' 00:15:12.132 04:13:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:12.132 04:13:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.391 04:13:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.391 04:13:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:12.391 04:13:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.391 04:13:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.391 04:13:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.651 04:13:12 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:12.651 04:13:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:12.651 04:13:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.651 04:13:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.651 [2024-11-21 04:13:12.377099] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:12.651 04:13:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.651 04:13:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:12.651 04:13:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:12.651 04:13:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:12.651 04:13:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:12.651 04:13:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:12.651 04:13:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:12.651 04:13:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:12.651 04:13:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:12.651 04:13:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:12.651 04:13:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:12.651 04:13:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:15:12.651 04:13:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.651 04:13:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:12.651 04:13:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.651 04:13:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.651 04:13:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:12.651 "name": "Existed_Raid", 00:15:12.651 "uuid": "f3740b23-5e44-4036-8b0f-4791a5833b17", 00:15:12.651 "strip_size_kb": 64, 00:15:12.651 "state": "configuring", 00:15:12.651 "raid_level": "raid5f", 00:15:12.651 "superblock": true, 00:15:12.651 "num_base_bdevs": 4, 00:15:12.651 "num_base_bdevs_discovered": 3, 00:15:12.651 "num_base_bdevs_operational": 4, 00:15:12.651 "base_bdevs_list": [ 00:15:12.651 { 00:15:12.651 "name": null, 00:15:12.651 "uuid": "874ada8c-de48-4aad-98ec-041f72a49143", 00:15:12.651 "is_configured": false, 00:15:12.651 "data_offset": 0, 00:15:12.651 "data_size": 63488 00:15:12.651 }, 00:15:12.651 { 00:15:12.651 "name": "BaseBdev2", 00:15:12.651 "uuid": "beef1c32-7750-47c1-bfe9-84f2bfdacd15", 00:15:12.651 "is_configured": true, 00:15:12.651 "data_offset": 2048, 00:15:12.651 "data_size": 63488 00:15:12.651 }, 00:15:12.651 { 00:15:12.651 "name": "BaseBdev3", 00:15:12.651 "uuid": "b992a089-6c70-4e25-aad1-e777cb70f9a2", 00:15:12.651 "is_configured": true, 00:15:12.651 "data_offset": 2048, 00:15:12.651 "data_size": 63488 00:15:12.651 }, 00:15:12.651 { 00:15:12.651 "name": "BaseBdev4", 00:15:12.651 "uuid": "28f05c4c-b68e-4390-b54c-2018e84f2e0c", 00:15:12.651 "is_configured": true, 00:15:12.651 "data_offset": 2048, 00:15:12.651 "data_size": 63488 00:15:12.651 } 00:15:12.651 ] 00:15:12.651 }' 00:15:12.651 04:13:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 
-- # xtrace_disable 00:15:12.651 04:13:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.911 04:13:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:12.911 04:13:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.911 04:13:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.911 04:13:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.911 04:13:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.912 04:13:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:12.912 04:13:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.912 04:13:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.912 04:13:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.912 04:13:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:12.912 04:13:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.171 04:13:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 874ada8c-de48-4aad-98ec-041f72a49143 00:15:13.171 04:13:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.171 04:13:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.171 [2024-11-21 04:13:12.903915] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:13.171 [2024-11-21 04:13:12.904244] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:15:13.171 [2024-11-21 04:13:12.904297] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:13.171 [2024-11-21 04:13:12.904628] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:15:13.171 NewBaseBdev 00:15:13.171 [2024-11-21 04:13:12.905176] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:15:13.171 [2024-11-21 04:13:12.905200] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:15:13.171 [2024-11-21 04:13:12.905321] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:13.171 04:13:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.171 04:13:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:13.171 04:13:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:13.171 04:13:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:13.171 04:13:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:13.171 04:13:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:13.171 04:13:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:13.171 04:13:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:13.171 04:13:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.171 04:13:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.171 04:13:12 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.171 04:13:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:13.171 04:13:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.171 04:13:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.171 [ 00:15:13.171 { 00:15:13.171 "name": "NewBaseBdev", 00:15:13.171 "aliases": [ 00:15:13.171 "874ada8c-de48-4aad-98ec-041f72a49143" 00:15:13.171 ], 00:15:13.171 "product_name": "Malloc disk", 00:15:13.171 "block_size": 512, 00:15:13.171 "num_blocks": 65536, 00:15:13.171 "uuid": "874ada8c-de48-4aad-98ec-041f72a49143", 00:15:13.171 "assigned_rate_limits": { 00:15:13.171 "rw_ios_per_sec": 0, 00:15:13.171 "rw_mbytes_per_sec": 0, 00:15:13.171 "r_mbytes_per_sec": 0, 00:15:13.171 "w_mbytes_per_sec": 0 00:15:13.171 }, 00:15:13.171 "claimed": true, 00:15:13.171 "claim_type": "exclusive_write", 00:15:13.171 "zoned": false, 00:15:13.171 "supported_io_types": { 00:15:13.171 "read": true, 00:15:13.171 "write": true, 00:15:13.171 "unmap": true, 00:15:13.171 "flush": true, 00:15:13.171 "reset": true, 00:15:13.171 "nvme_admin": false, 00:15:13.171 "nvme_io": false, 00:15:13.171 "nvme_io_md": false, 00:15:13.171 "write_zeroes": true, 00:15:13.171 "zcopy": true, 00:15:13.171 "get_zone_info": false, 00:15:13.171 "zone_management": false, 00:15:13.171 "zone_append": false, 00:15:13.171 "compare": false, 00:15:13.171 "compare_and_write": false, 00:15:13.171 "abort": true, 00:15:13.171 "seek_hole": false, 00:15:13.171 "seek_data": false, 00:15:13.171 "copy": true, 00:15:13.171 "nvme_iov_md": false 00:15:13.171 }, 00:15:13.171 "memory_domains": [ 00:15:13.171 { 00:15:13.171 "dma_device_id": "system", 00:15:13.171 "dma_device_type": 1 00:15:13.171 }, 00:15:13.171 { 00:15:13.171 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:13.171 "dma_device_type": 2 00:15:13.171 } 
00:15:13.171 ], 00:15:13.171 "driver_specific": {} 00:15:13.171 } 00:15:13.171 ] 00:15:13.171 04:13:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.171 04:13:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:13.171 04:13:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:15:13.171 04:13:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:13.171 04:13:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:13.171 04:13:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:13.171 04:13:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:13.171 04:13:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:13.171 04:13:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:13.171 04:13:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:13.171 04:13:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:13.171 04:13:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:13.171 04:13:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.171 04:13:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:13.171 04:13:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.171 04:13:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.171 
04:13:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.171 04:13:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:13.171 "name": "Existed_Raid", 00:15:13.171 "uuid": "f3740b23-5e44-4036-8b0f-4791a5833b17", 00:15:13.171 "strip_size_kb": 64, 00:15:13.171 "state": "online", 00:15:13.171 "raid_level": "raid5f", 00:15:13.171 "superblock": true, 00:15:13.171 "num_base_bdevs": 4, 00:15:13.171 "num_base_bdevs_discovered": 4, 00:15:13.171 "num_base_bdevs_operational": 4, 00:15:13.171 "base_bdevs_list": [ 00:15:13.171 { 00:15:13.171 "name": "NewBaseBdev", 00:15:13.171 "uuid": "874ada8c-de48-4aad-98ec-041f72a49143", 00:15:13.171 "is_configured": true, 00:15:13.171 "data_offset": 2048, 00:15:13.171 "data_size": 63488 00:15:13.171 }, 00:15:13.171 { 00:15:13.171 "name": "BaseBdev2", 00:15:13.171 "uuid": "beef1c32-7750-47c1-bfe9-84f2bfdacd15", 00:15:13.171 "is_configured": true, 00:15:13.171 "data_offset": 2048, 00:15:13.171 "data_size": 63488 00:15:13.171 }, 00:15:13.171 { 00:15:13.171 "name": "BaseBdev3", 00:15:13.171 "uuid": "b992a089-6c70-4e25-aad1-e777cb70f9a2", 00:15:13.171 "is_configured": true, 00:15:13.171 "data_offset": 2048, 00:15:13.171 "data_size": 63488 00:15:13.171 }, 00:15:13.171 { 00:15:13.171 "name": "BaseBdev4", 00:15:13.171 "uuid": "28f05c4c-b68e-4390-b54c-2018e84f2e0c", 00:15:13.171 "is_configured": true, 00:15:13.171 "data_offset": 2048, 00:15:13.171 "data_size": 63488 00:15:13.171 } 00:15:13.171 ] 00:15:13.171 }' 00:15:13.171 04:13:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:13.171 04:13:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.741 04:13:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:13.741 04:13:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=Existed_Raid 00:15:13.741 04:13:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:13.741 04:13:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:13.741 04:13:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:13.741 04:13:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:13.741 04:13:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:13.741 04:13:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:13.741 04:13:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.741 04:13:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.741 [2024-11-21 04:13:13.423301] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:13.741 04:13:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.741 04:13:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:13.741 "name": "Existed_Raid", 00:15:13.741 "aliases": [ 00:15:13.741 "f3740b23-5e44-4036-8b0f-4791a5833b17" 00:15:13.741 ], 00:15:13.741 "product_name": "Raid Volume", 00:15:13.741 "block_size": 512, 00:15:13.741 "num_blocks": 190464, 00:15:13.741 "uuid": "f3740b23-5e44-4036-8b0f-4791a5833b17", 00:15:13.741 "assigned_rate_limits": { 00:15:13.741 "rw_ios_per_sec": 0, 00:15:13.741 "rw_mbytes_per_sec": 0, 00:15:13.741 "r_mbytes_per_sec": 0, 00:15:13.741 "w_mbytes_per_sec": 0 00:15:13.741 }, 00:15:13.741 "claimed": false, 00:15:13.741 "zoned": false, 00:15:13.741 "supported_io_types": { 00:15:13.741 "read": true, 00:15:13.741 "write": true, 00:15:13.741 "unmap": false, 00:15:13.741 "flush": false, 
00:15:13.741 "reset": true, 00:15:13.741 "nvme_admin": false, 00:15:13.741 "nvme_io": false, 00:15:13.741 "nvme_io_md": false, 00:15:13.741 "write_zeroes": true, 00:15:13.741 "zcopy": false, 00:15:13.741 "get_zone_info": false, 00:15:13.741 "zone_management": false, 00:15:13.741 "zone_append": false, 00:15:13.741 "compare": false, 00:15:13.741 "compare_and_write": false, 00:15:13.741 "abort": false, 00:15:13.741 "seek_hole": false, 00:15:13.741 "seek_data": false, 00:15:13.741 "copy": false, 00:15:13.741 "nvme_iov_md": false 00:15:13.741 }, 00:15:13.741 "driver_specific": { 00:15:13.741 "raid": { 00:15:13.741 "uuid": "f3740b23-5e44-4036-8b0f-4791a5833b17", 00:15:13.741 "strip_size_kb": 64, 00:15:13.741 "state": "online", 00:15:13.741 "raid_level": "raid5f", 00:15:13.741 "superblock": true, 00:15:13.741 "num_base_bdevs": 4, 00:15:13.741 "num_base_bdevs_discovered": 4, 00:15:13.741 "num_base_bdevs_operational": 4, 00:15:13.741 "base_bdevs_list": [ 00:15:13.741 { 00:15:13.741 "name": "NewBaseBdev", 00:15:13.741 "uuid": "874ada8c-de48-4aad-98ec-041f72a49143", 00:15:13.741 "is_configured": true, 00:15:13.741 "data_offset": 2048, 00:15:13.741 "data_size": 63488 00:15:13.741 }, 00:15:13.741 { 00:15:13.741 "name": "BaseBdev2", 00:15:13.741 "uuid": "beef1c32-7750-47c1-bfe9-84f2bfdacd15", 00:15:13.741 "is_configured": true, 00:15:13.741 "data_offset": 2048, 00:15:13.741 "data_size": 63488 00:15:13.741 }, 00:15:13.741 { 00:15:13.741 "name": "BaseBdev3", 00:15:13.742 "uuid": "b992a089-6c70-4e25-aad1-e777cb70f9a2", 00:15:13.742 "is_configured": true, 00:15:13.742 "data_offset": 2048, 00:15:13.742 "data_size": 63488 00:15:13.742 }, 00:15:13.742 { 00:15:13.742 "name": "BaseBdev4", 00:15:13.742 "uuid": "28f05c4c-b68e-4390-b54c-2018e84f2e0c", 00:15:13.742 "is_configured": true, 00:15:13.742 "data_offset": 2048, 00:15:13.742 "data_size": 63488 00:15:13.742 } 00:15:13.742 ] 00:15:13.742 } 00:15:13.742 } 00:15:13.742 }' 00:15:13.742 04:13:13 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:13.742 04:13:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:13.742 BaseBdev2 00:15:13.742 BaseBdev3 00:15:13.742 BaseBdev4' 00:15:13.742 04:13:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:13.742 04:13:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:13.742 04:13:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:13.742 04:13:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:13.742 04:13:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.742 04:13:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.742 04:13:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:13.742 04:13:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.742 04:13:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:13.742 04:13:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:13.742 04:13:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:13.742 04:13:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:13.742 04:13:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:13.742 
04:13:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.742 04:13:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.742 04:13:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.742 04:13:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:13.742 04:13:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:13.742 04:13:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:13.742 04:13:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:13.742 04:13:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:13.742 04:13:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.742 04:13:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.742 04:13:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.742 04:13:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:13.742 04:13:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:13.742 04:13:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:13.742 04:13:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:13.742 04:13:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:13.742 04:13:13 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.742 04:13:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.001 04:13:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.001 04:13:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:14.001 04:13:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:14.001 04:13:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:14.002 04:13:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.002 04:13:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.002 [2024-11-21 04:13:13.750561] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:14.002 [2024-11-21 04:13:13.750628] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:14.002 [2024-11-21 04:13:13.750730] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:14.002 [2024-11-21 04:13:13.751066] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:14.002 [2024-11-21 04:13:13.751121] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:15:14.002 04:13:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.002 04:13:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 93948 00:15:14.002 04:13:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 93948 ']' 00:15:14.002 04:13:13 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@958 -- # kill -0 93948 00:15:14.002 04:13:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:14.002 04:13:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:14.002 04:13:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93948 00:15:14.002 04:13:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:14.002 04:13:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:14.002 04:13:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93948' 00:15:14.002 killing process with pid 93948 00:15:14.002 04:13:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 93948 00:15:14.002 [2024-11-21 04:13:13.797947] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:14.002 04:13:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 93948 00:15:14.002 [2024-11-21 04:13:13.872892] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:14.262 04:13:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:15:14.262 00:15:14.262 real 0m9.830s 00:15:14.262 user 0m16.458s 00:15:14.262 sys 0m2.261s 00:15:14.262 ************************************ 00:15:14.262 END TEST raid5f_state_function_test_sb 00:15:14.262 ************************************ 00:15:14.262 04:13:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:14.262 04:13:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.522 04:13:14 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:15:14.522 04:13:14 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:14.522 04:13:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:14.522 04:13:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:14.522 ************************************ 00:15:14.522 START TEST raid5f_superblock_test 00:15:14.522 ************************************ 00:15:14.522 04:13:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4 00:15:14.522 04:13:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:15:14.522 04:13:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:15:14.522 04:13:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:14.522 04:13:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:14.522 04:13:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:14.522 04:13:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:14.522 04:13:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:14.522 04:13:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:14.522 04:13:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:14.522 04:13:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:14.522 04:13:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:14.522 04:13:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:14.522 04:13:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:14.522 04:13:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 
00:15:14.522 04:13:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:15:14.522 04:13:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:15:14.522 04:13:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=94602 00:15:14.522 04:13:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:14.522 04:13:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 94602 00:15:14.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:14.522 04:13:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 94602 ']' 00:15:14.523 04:13:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:14.523 04:13:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:14.523 04:13:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:14.523 04:13:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:14.523 04:13:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.523 [2024-11-21 04:13:14.377456] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:15:14.523 [2024-11-21 04:13:14.377684] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94602 ] 00:15:14.783 [2024-11-21 04:13:14.535113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:14.783 [2024-11-21 04:13:14.575621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:14.783 [2024-11-21 04:13:14.652684] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:14.783 [2024-11-21 04:13:14.652823] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:15.354 04:13:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:15.354 04:13:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:15:15.354 04:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:15.354 04:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:15.354 04:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:15.354 04:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:15.354 04:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:15.354 04:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:15.354 04:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:15.354 04:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:15.354 04:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:15:15.354 04:13:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.354 04:13:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.354 malloc1 00:15:15.354 04:13:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.354 04:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:15.354 04:13:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.354 04:13:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.354 [2024-11-21 04:13:15.211662] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:15.354 [2024-11-21 04:13:15.211812] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:15.354 [2024-11-21 04:13:15.211865] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:15:15.354 [2024-11-21 04:13:15.211908] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:15.354 [2024-11-21 04:13:15.214444] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:15.354 [2024-11-21 04:13:15.214483] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:15.354 pt1 00:15:15.354 04:13:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.354 04:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:15.354 04:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:15.354 04:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:15.354 04:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:15:15.354 04:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:15.354 04:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:15.354 04:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:15.354 04:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:15.354 04:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:15:15.354 04:13:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.354 04:13:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.354 malloc2 00:15:15.354 04:13:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.354 04:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:15.354 04:13:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.354 04:13:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.354 [2024-11-21 04:13:15.246283] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:15.354 [2024-11-21 04:13:15.246395] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:15.354 [2024-11-21 04:13:15.246427] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:15.354 [2024-11-21 04:13:15.246455] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:15.354 [2024-11-21 04:13:15.248833] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:15.354 [2024-11-21 04:13:15.248924] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:15.354 pt2 00:15:15.354 04:13:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.354 04:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:15.354 04:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:15.354 04:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:15.354 04:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:15.354 04:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:15.354 04:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:15.354 04:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:15.354 04:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:15.354 04:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:15:15.354 04:13:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.354 04:13:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.354 malloc3 00:15:15.354 04:13:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.354 04:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:15.354 04:13:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.354 04:13:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.354 [2024-11-21 04:13:15.284938] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:15.354 [2024-11-21 04:13:15.285047] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:15.354 [2024-11-21 04:13:15.285084] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:15.354 [2024-11-21 04:13:15.285114] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:15.354 [2024-11-21 04:13:15.287541] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:15.354 [2024-11-21 04:13:15.287613] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:15.354 pt3 00:15:15.354 04:13:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.355 04:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:15.355 04:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:15.355 04:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:15:15.355 04:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:15:15.355 04:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:15:15.355 04:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:15.355 04:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:15.355 04:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:15.355 04:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:15:15.355 04:13:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.355 04:13:15 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.615 malloc4 00:15:15.615 04:13:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.615 04:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:15.615 04:13:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.615 04:13:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.615 [2024-11-21 04:13:15.338730] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:15.615 [2024-11-21 04:13:15.338912] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:15.615 [2024-11-21 04:13:15.338983] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:15.615 [2024-11-21 04:13:15.339059] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:15.615 [2024-11-21 04:13:15.343090] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:15.615 [2024-11-21 04:13:15.343191] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:15.615 pt4 00:15:15.615 04:13:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.615 04:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:15.615 04:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:15.615 04:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:15:15.615 04:13:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.615 04:13:15 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:15.615 [2024-11-21 04:13:15.351468] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:15.615 [2024-11-21 04:13:15.353843] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:15.615 [2024-11-21 04:13:15.353959] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:15.615 [2024-11-21 04:13:15.354029] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:15.615 [2024-11-21 04:13:15.354280] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:15:15.615 [2024-11-21 04:13:15.354332] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:15.615 [2024-11-21 04:13:15.354634] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:15:15.615 [2024-11-21 04:13:15.355228] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:15:15.615 [2024-11-21 04:13:15.355290] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:15:15.615 [2024-11-21 04:13:15.355542] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:15.615 04:13:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.615 04:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:15.615 04:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:15.615 04:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:15.615 04:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:15.615 04:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:15.615 
04:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:15.615 04:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:15.615 04:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:15.615 04:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:15.615 04:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:15.615 04:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.615 04:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.615 04:13:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.615 04:13:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.615 04:13:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.616 04:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:15.616 "name": "raid_bdev1", 00:15:15.616 "uuid": "9ee37413-c3aa-4392-8d6d-b53f610e718e", 00:15:15.616 "strip_size_kb": 64, 00:15:15.616 "state": "online", 00:15:15.616 "raid_level": "raid5f", 00:15:15.616 "superblock": true, 00:15:15.616 "num_base_bdevs": 4, 00:15:15.616 "num_base_bdevs_discovered": 4, 00:15:15.616 "num_base_bdevs_operational": 4, 00:15:15.616 "base_bdevs_list": [ 00:15:15.616 { 00:15:15.616 "name": "pt1", 00:15:15.616 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:15.616 "is_configured": true, 00:15:15.616 "data_offset": 2048, 00:15:15.616 "data_size": 63488 00:15:15.616 }, 00:15:15.616 { 00:15:15.616 "name": "pt2", 00:15:15.616 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:15.616 "is_configured": true, 00:15:15.616 "data_offset": 2048, 00:15:15.616 
"data_size": 63488 00:15:15.616 }, 00:15:15.616 { 00:15:15.616 "name": "pt3", 00:15:15.616 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:15.616 "is_configured": true, 00:15:15.616 "data_offset": 2048, 00:15:15.616 "data_size": 63488 00:15:15.616 }, 00:15:15.616 { 00:15:15.616 "name": "pt4", 00:15:15.616 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:15.616 "is_configured": true, 00:15:15.616 "data_offset": 2048, 00:15:15.616 "data_size": 63488 00:15:15.616 } 00:15:15.616 ] 00:15:15.616 }' 00:15:15.616 04:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:15.616 04:13:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.876 04:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:15.876 04:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:15.876 04:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:15.876 04:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:15.876 04:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:15.876 04:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:15.876 04:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:15.876 04:13:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.876 04:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:15.876 04:13:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.876 [2024-11-21 04:13:15.818890] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:15.876 04:13:15 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.136 04:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:16.136 "name": "raid_bdev1", 00:15:16.136 "aliases": [ 00:15:16.136 "9ee37413-c3aa-4392-8d6d-b53f610e718e" 00:15:16.136 ], 00:15:16.136 "product_name": "Raid Volume", 00:15:16.136 "block_size": 512, 00:15:16.136 "num_blocks": 190464, 00:15:16.136 "uuid": "9ee37413-c3aa-4392-8d6d-b53f610e718e", 00:15:16.136 "assigned_rate_limits": { 00:15:16.136 "rw_ios_per_sec": 0, 00:15:16.136 "rw_mbytes_per_sec": 0, 00:15:16.136 "r_mbytes_per_sec": 0, 00:15:16.136 "w_mbytes_per_sec": 0 00:15:16.136 }, 00:15:16.136 "claimed": false, 00:15:16.137 "zoned": false, 00:15:16.137 "supported_io_types": { 00:15:16.137 "read": true, 00:15:16.137 "write": true, 00:15:16.137 "unmap": false, 00:15:16.137 "flush": false, 00:15:16.137 "reset": true, 00:15:16.137 "nvme_admin": false, 00:15:16.137 "nvme_io": false, 00:15:16.137 "nvme_io_md": false, 00:15:16.137 "write_zeroes": true, 00:15:16.137 "zcopy": false, 00:15:16.137 "get_zone_info": false, 00:15:16.137 "zone_management": false, 00:15:16.137 "zone_append": false, 00:15:16.137 "compare": false, 00:15:16.137 "compare_and_write": false, 00:15:16.137 "abort": false, 00:15:16.137 "seek_hole": false, 00:15:16.137 "seek_data": false, 00:15:16.137 "copy": false, 00:15:16.137 "nvme_iov_md": false 00:15:16.137 }, 00:15:16.137 "driver_specific": { 00:15:16.137 "raid": { 00:15:16.137 "uuid": "9ee37413-c3aa-4392-8d6d-b53f610e718e", 00:15:16.137 "strip_size_kb": 64, 00:15:16.137 "state": "online", 00:15:16.137 "raid_level": "raid5f", 00:15:16.137 "superblock": true, 00:15:16.137 "num_base_bdevs": 4, 00:15:16.137 "num_base_bdevs_discovered": 4, 00:15:16.137 "num_base_bdevs_operational": 4, 00:15:16.137 "base_bdevs_list": [ 00:15:16.137 { 00:15:16.137 "name": "pt1", 00:15:16.137 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:16.137 "is_configured": true, 00:15:16.137 "data_offset": 2048, 
00:15:16.137 "data_size": 63488 00:15:16.137 }, 00:15:16.137 { 00:15:16.137 "name": "pt2", 00:15:16.137 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:16.137 "is_configured": true, 00:15:16.137 "data_offset": 2048, 00:15:16.137 "data_size": 63488 00:15:16.137 }, 00:15:16.137 { 00:15:16.137 "name": "pt3", 00:15:16.137 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:16.137 "is_configured": true, 00:15:16.137 "data_offset": 2048, 00:15:16.137 "data_size": 63488 00:15:16.137 }, 00:15:16.137 { 00:15:16.137 "name": "pt4", 00:15:16.137 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:16.137 "is_configured": true, 00:15:16.137 "data_offset": 2048, 00:15:16.137 "data_size": 63488 00:15:16.137 } 00:15:16.137 ] 00:15:16.137 } 00:15:16.137 } 00:15:16.137 }' 00:15:16.137 04:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:16.137 04:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:16.137 pt2 00:15:16.137 pt3 00:15:16.137 pt4' 00:15:16.137 04:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:16.137 04:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:16.137 04:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:16.137 04:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:16.137 04:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:16.137 04:13:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.137 04:13:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.137 04:13:15 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.137 04:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:16.137 04:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:16.137 04:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:16.137 04:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:16.137 04:13:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:16.137 04:13:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.137 04:13:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.137 04:13:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.137 04:13:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:16.137 04:13:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:16.137 04:13:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:16.137 04:13:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:16.137 04:13:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:16.137 04:13:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.137 04:13:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.137 04:13:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.137 04:13:16 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:16.137 04:13:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:16.137 04:13:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:16.137 04:13:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:16.137 04:13:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:15:16.137 04:13:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.137 04:13:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.398 04:13:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.398 04:13:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:16.398 04:13:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:16.398 04:13:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:16.398 04:13:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:16.398 04:13:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.398 04:13:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.398 [2024-11-21 04:13:16.146329] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:16.398 04:13:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.398 04:13:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=9ee37413-c3aa-4392-8d6d-b53f610e718e 00:15:16.398 04:13:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
9ee37413-c3aa-4392-8d6d-b53f610e718e ']' 00:15:16.398 04:13:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:16.398 04:13:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.398 04:13:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.398 [2024-11-21 04:13:16.190063] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:16.398 [2024-11-21 04:13:16.190131] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:16.398 [2024-11-21 04:13:16.190248] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:16.398 [2024-11-21 04:13:16.190340] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:16.398 [2024-11-21 04:13:16.190362] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:15:16.398 04:13:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.399 04:13:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.399 04:13:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.399 04:13:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.399 04:13:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:16.399 04:13:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.399 04:13:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:16.399 04:13:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:16.399 04:13:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:16.399 
04:13:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:16.399 04:13:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.399 04:13:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.399 04:13:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.399 04:13:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:16.399 04:13:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:16.399 04:13:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.399 04:13:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.399 04:13:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.399 04:13:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:16.399 04:13:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:15:16.399 04:13:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.399 04:13:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.399 04:13:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.399 04:13:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:16.399 04:13:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:15:16.399 04:13:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.399 04:13:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.399 04:13:16 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.399 04:13:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:16.399 04:13:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:16.399 04:13:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.399 04:13:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.399 04:13:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.399 04:13:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:16.399 04:13:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:16.399 04:13:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:15:16.399 04:13:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:16.399 04:13:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:16.399 04:13:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:16.399 04:13:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:16.399 04:13:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:16.399 04:13:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:16.399 04:13:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:15:16.399 04:13:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.399 [2024-11-21 04:13:16.349817] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:16.399 [2024-11-21 04:13:16.351968] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:16.399 [2024-11-21 04:13:16.352054] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:16.399 [2024-11-21 04:13:16.352098] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:15:16.399 [2024-11-21 04:13:16.352180] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:16.399 [2024-11-21 04:13:16.352262] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:16.399 [2024-11-21 04:13:16.352336] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:16.399 [2024-11-21 04:13:16.352397] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:15:16.399 [2024-11-21 04:13:16.352453] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:16.399 [2024-11-21 04:13:16.352494] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:15:16.399 request: 00:15:16.399 { 00:15:16.399 "name": "raid_bdev1", 00:15:16.399 "raid_level": "raid5f", 00:15:16.399 "base_bdevs": [ 00:15:16.399 "malloc1", 00:15:16.399 "malloc2", 00:15:16.399 "malloc3", 00:15:16.399 "malloc4" 00:15:16.399 ], 00:15:16.399 "strip_size_kb": 64, 00:15:16.399 "superblock": false, 00:15:16.399 "method": "bdev_raid_create", 00:15:16.399 "req_id": 1 00:15:16.399 } 00:15:16.399 Got JSON-RPC error response 
00:15:16.399 response: 00:15:16.399 { 00:15:16.399 "code": -17, 00:15:16.399 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:16.399 } 00:15:16.399 04:13:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:16.399 04:13:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:15:16.399 04:13:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:16.399 04:13:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:16.399 04:13:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:16.399 04:13:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:16.399 04:13:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.399 04:13:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.399 04:13:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.660 04:13:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.660 04:13:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:16.660 04:13:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:16.660 04:13:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:16.660 04:13:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.660 04:13:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.660 [2024-11-21 04:13:16.397705] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:16.660 [2024-11-21 04:13:16.397801] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:15:16.660 [2024-11-21 04:13:16.397838] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:16.660 [2024-11-21 04:13:16.397864] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:16.660 [2024-11-21 04:13:16.400324] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:16.660 [2024-11-21 04:13:16.400389] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:16.660 [2024-11-21 04:13:16.400470] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:16.660 [2024-11-21 04:13:16.400549] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:16.660 pt1 00:15:16.660 04:13:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.660 04:13:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:15:16.660 04:13:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:16.660 04:13:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:16.660 04:13:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:16.660 04:13:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:16.660 04:13:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:16.660 04:13:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:16.660 04:13:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:16.660 04:13:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:16.660 04:13:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:15:16.660 04:13:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.660 04:13:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.660 04:13:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.660 04:13:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.660 04:13:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.660 04:13:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:16.660 "name": "raid_bdev1", 00:15:16.660 "uuid": "9ee37413-c3aa-4392-8d6d-b53f610e718e", 00:15:16.660 "strip_size_kb": 64, 00:15:16.660 "state": "configuring", 00:15:16.660 "raid_level": "raid5f", 00:15:16.660 "superblock": true, 00:15:16.660 "num_base_bdevs": 4, 00:15:16.660 "num_base_bdevs_discovered": 1, 00:15:16.660 "num_base_bdevs_operational": 4, 00:15:16.660 "base_bdevs_list": [ 00:15:16.660 { 00:15:16.660 "name": "pt1", 00:15:16.660 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:16.660 "is_configured": true, 00:15:16.660 "data_offset": 2048, 00:15:16.660 "data_size": 63488 00:15:16.660 }, 00:15:16.660 { 00:15:16.660 "name": null, 00:15:16.660 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:16.660 "is_configured": false, 00:15:16.660 "data_offset": 2048, 00:15:16.660 "data_size": 63488 00:15:16.660 }, 00:15:16.660 { 00:15:16.660 "name": null, 00:15:16.660 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:16.660 "is_configured": false, 00:15:16.660 "data_offset": 2048, 00:15:16.660 "data_size": 63488 00:15:16.660 }, 00:15:16.660 { 00:15:16.660 "name": null, 00:15:16.660 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:16.660 "is_configured": false, 00:15:16.660 "data_offset": 2048, 00:15:16.660 "data_size": 63488 00:15:16.660 } 00:15:16.660 ] 00:15:16.660 }' 
00:15:16.660 04:13:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:16.660 04:13:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.921 04:13:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:15:16.921 04:13:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:16.921 04:13:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.921 04:13:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.921 [2024-11-21 04:13:16.836964] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:16.921 [2024-11-21 04:13:16.837051] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:16.921 [2024-11-21 04:13:16.837071] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:16.921 [2024-11-21 04:13:16.837080] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:16.921 [2024-11-21 04:13:16.837459] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:16.921 [2024-11-21 04:13:16.837477] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:16.921 [2024-11-21 04:13:16.837534] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:16.921 [2024-11-21 04:13:16.837553] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:16.921 pt2 00:15:16.921 04:13:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.921 04:13:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:15:16.921 04:13:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:16.921 04:13:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.921 [2024-11-21 04:13:16.848977] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:16.921 04:13:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.921 04:13:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:15:16.921 04:13:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:16.921 04:13:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:16.921 04:13:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:16.921 04:13:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:16.921 04:13:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:16.921 04:13:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:16.921 04:13:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:16.921 04:13:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:16.921 04:13:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:16.921 04:13:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.921 04:13:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.921 04:13:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.921 04:13:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.921 04:13:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:15:17.182 04:13:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:17.182 "name": "raid_bdev1", 00:15:17.182 "uuid": "9ee37413-c3aa-4392-8d6d-b53f610e718e", 00:15:17.182 "strip_size_kb": 64, 00:15:17.182 "state": "configuring", 00:15:17.182 "raid_level": "raid5f", 00:15:17.182 "superblock": true, 00:15:17.182 "num_base_bdevs": 4, 00:15:17.182 "num_base_bdevs_discovered": 1, 00:15:17.182 "num_base_bdevs_operational": 4, 00:15:17.182 "base_bdevs_list": [ 00:15:17.182 { 00:15:17.182 "name": "pt1", 00:15:17.182 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:17.182 "is_configured": true, 00:15:17.182 "data_offset": 2048, 00:15:17.182 "data_size": 63488 00:15:17.182 }, 00:15:17.182 { 00:15:17.182 "name": null, 00:15:17.182 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:17.182 "is_configured": false, 00:15:17.182 "data_offset": 0, 00:15:17.182 "data_size": 63488 00:15:17.182 }, 00:15:17.182 { 00:15:17.182 "name": null, 00:15:17.182 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:17.182 "is_configured": false, 00:15:17.182 "data_offset": 2048, 00:15:17.182 "data_size": 63488 00:15:17.182 }, 00:15:17.182 { 00:15:17.182 "name": null, 00:15:17.182 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:17.182 "is_configured": false, 00:15:17.182 "data_offset": 2048, 00:15:17.182 "data_size": 63488 00:15:17.182 } 00:15:17.182 ] 00:15:17.182 }' 00:15:17.182 04:13:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:17.182 04:13:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.443 04:13:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:17.443 04:13:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:17.443 04:13:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:15:17.443 04:13:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.443 04:13:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.443 [2024-11-21 04:13:17.344319] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:17.443 [2024-11-21 04:13:17.344436] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:17.443 [2024-11-21 04:13:17.344459] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:15:17.443 [2024-11-21 04:13:17.344471] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:17.443 [2024-11-21 04:13:17.344820] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:17.443 [2024-11-21 04:13:17.344840] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:17.443 [2024-11-21 04:13:17.344899] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:17.443 [2024-11-21 04:13:17.344922] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:17.443 pt2 00:15:17.443 04:13:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.443 04:13:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:17.443 04:13:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:17.443 04:13:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:17.443 04:13:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.443 04:13:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.443 [2024-11-21 04:13:17.356308] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:15:17.443 [2024-11-21 04:13:17.356399] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:17.443 [2024-11-21 04:13:17.356431] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:17.443 [2024-11-21 04:13:17.356460] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:17.443 [2024-11-21 04:13:17.356883] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:17.443 [2024-11-21 04:13:17.356941] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:17.443 [2024-11-21 04:13:17.357032] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:17.443 [2024-11-21 04:13:17.357081] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:17.443 pt3 00:15:17.443 04:13:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.443 04:13:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:17.443 04:13:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:17.443 04:13:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:17.443 04:13:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.443 04:13:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.443 [2024-11-21 04:13:17.368292] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:17.443 [2024-11-21 04:13:17.368393] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:17.443 [2024-11-21 04:13:17.368423] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:15:17.443 [2024-11-21 04:13:17.368450] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:17.443 [2024-11-21 04:13:17.368765] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:17.443 [2024-11-21 04:13:17.368821] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:17.443 [2024-11-21 04:13:17.368884] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:17.443 [2024-11-21 04:13:17.368905] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:17.443 [2024-11-21 04:13:17.369019] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:15:17.443 [2024-11-21 04:13:17.369035] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:17.443 [2024-11-21 04:13:17.369282] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:15:17.443 [2024-11-21 04:13:17.369749] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:15:17.443 [2024-11-21 04:13:17.369760] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:15:17.443 [2024-11-21 04:13:17.369854] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:17.443 pt4 00:15:17.443 04:13:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.443 04:13:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:17.443 04:13:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:17.443 04:13:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:17.443 04:13:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:17.443 04:13:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:15:17.443 04:13:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:17.443 04:13:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:17.443 04:13:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:17.443 04:13:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:17.443 04:13:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:17.443 04:13:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:17.443 04:13:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:17.443 04:13:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.443 04:13:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.443 04:13:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.443 04:13:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.443 04:13:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.704 04:13:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:17.704 "name": "raid_bdev1", 00:15:17.704 "uuid": "9ee37413-c3aa-4392-8d6d-b53f610e718e", 00:15:17.704 "strip_size_kb": 64, 00:15:17.704 "state": "online", 00:15:17.704 "raid_level": "raid5f", 00:15:17.704 "superblock": true, 00:15:17.704 "num_base_bdevs": 4, 00:15:17.704 "num_base_bdevs_discovered": 4, 00:15:17.704 "num_base_bdevs_operational": 4, 00:15:17.704 "base_bdevs_list": [ 00:15:17.704 { 00:15:17.704 "name": "pt1", 00:15:17.704 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:17.704 "is_configured": true, 00:15:17.704 
"data_offset": 2048, 00:15:17.704 "data_size": 63488 00:15:17.704 }, 00:15:17.704 { 00:15:17.704 "name": "pt2", 00:15:17.704 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:17.704 "is_configured": true, 00:15:17.704 "data_offset": 2048, 00:15:17.704 "data_size": 63488 00:15:17.704 }, 00:15:17.704 { 00:15:17.704 "name": "pt3", 00:15:17.704 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:17.704 "is_configured": true, 00:15:17.704 "data_offset": 2048, 00:15:17.704 "data_size": 63488 00:15:17.704 }, 00:15:17.704 { 00:15:17.704 "name": "pt4", 00:15:17.704 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:17.704 "is_configured": true, 00:15:17.704 "data_offset": 2048, 00:15:17.704 "data_size": 63488 00:15:17.704 } 00:15:17.704 ] 00:15:17.704 }' 00:15:17.704 04:13:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:17.704 04:13:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.964 04:13:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:17.964 04:13:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:17.964 04:13:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:17.964 04:13:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:17.964 04:13:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:17.964 04:13:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:17.964 04:13:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:17.964 04:13:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.964 04:13:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.964 04:13:17 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:17.964 [2024-11-21 04:13:17.804045] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:17.964 04:13:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.964 04:13:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:17.964 "name": "raid_bdev1", 00:15:17.964 "aliases": [ 00:15:17.964 "9ee37413-c3aa-4392-8d6d-b53f610e718e" 00:15:17.964 ], 00:15:17.964 "product_name": "Raid Volume", 00:15:17.964 "block_size": 512, 00:15:17.964 "num_blocks": 190464, 00:15:17.964 "uuid": "9ee37413-c3aa-4392-8d6d-b53f610e718e", 00:15:17.964 "assigned_rate_limits": { 00:15:17.964 "rw_ios_per_sec": 0, 00:15:17.964 "rw_mbytes_per_sec": 0, 00:15:17.964 "r_mbytes_per_sec": 0, 00:15:17.964 "w_mbytes_per_sec": 0 00:15:17.964 }, 00:15:17.964 "claimed": false, 00:15:17.964 "zoned": false, 00:15:17.964 "supported_io_types": { 00:15:17.964 "read": true, 00:15:17.964 "write": true, 00:15:17.964 "unmap": false, 00:15:17.964 "flush": false, 00:15:17.964 "reset": true, 00:15:17.964 "nvme_admin": false, 00:15:17.964 "nvme_io": false, 00:15:17.964 "nvme_io_md": false, 00:15:17.964 "write_zeroes": true, 00:15:17.964 "zcopy": false, 00:15:17.964 "get_zone_info": false, 00:15:17.964 "zone_management": false, 00:15:17.964 "zone_append": false, 00:15:17.964 "compare": false, 00:15:17.964 "compare_and_write": false, 00:15:17.964 "abort": false, 00:15:17.964 "seek_hole": false, 00:15:17.964 "seek_data": false, 00:15:17.964 "copy": false, 00:15:17.964 "nvme_iov_md": false 00:15:17.964 }, 00:15:17.964 "driver_specific": { 00:15:17.964 "raid": { 00:15:17.964 "uuid": "9ee37413-c3aa-4392-8d6d-b53f610e718e", 00:15:17.964 "strip_size_kb": 64, 00:15:17.964 "state": "online", 00:15:17.964 "raid_level": "raid5f", 00:15:17.964 "superblock": true, 00:15:17.964 "num_base_bdevs": 4, 00:15:17.964 "num_base_bdevs_discovered": 4, 
00:15:17.964 "num_base_bdevs_operational": 4, 00:15:17.964 "base_bdevs_list": [ 00:15:17.964 { 00:15:17.964 "name": "pt1", 00:15:17.964 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:17.964 "is_configured": true, 00:15:17.964 "data_offset": 2048, 00:15:17.964 "data_size": 63488 00:15:17.964 }, 00:15:17.964 { 00:15:17.964 "name": "pt2", 00:15:17.964 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:17.964 "is_configured": true, 00:15:17.964 "data_offset": 2048, 00:15:17.964 "data_size": 63488 00:15:17.964 }, 00:15:17.964 { 00:15:17.964 "name": "pt3", 00:15:17.964 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:17.964 "is_configured": true, 00:15:17.964 "data_offset": 2048, 00:15:17.964 "data_size": 63488 00:15:17.964 }, 00:15:17.964 { 00:15:17.964 "name": "pt4", 00:15:17.964 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:17.964 "is_configured": true, 00:15:17.964 "data_offset": 2048, 00:15:17.964 "data_size": 63488 00:15:17.964 } 00:15:17.964 ] 00:15:17.964 } 00:15:17.964 } 00:15:17.964 }' 00:15:17.964 04:13:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:17.964 04:13:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:17.964 pt2 00:15:17.964 pt3 00:15:17.964 pt4' 00:15:17.964 04:13:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:17.964 04:13:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:17.964 04:13:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:17.964 04:13:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:17.964 04:13:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt1 00:15:17.965 04:13:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.965 04:13:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.225 04:13:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.225 04:13:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:18.225 04:13:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:18.225 04:13:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:18.225 04:13:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:18.225 04:13:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.225 04:13:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.225 04:13:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:18.225 04:13:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.225 04:13:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:18.225 04:13:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:18.225 04:13:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:18.225 04:13:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:18.225 04:13:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:18.225 04:13:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.225 04:13:18 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.225 04:13:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.225 04:13:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:18.225 04:13:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:18.225 04:13:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:18.225 04:13:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:15:18.225 04:13:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:18.225 04:13:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.225 04:13:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.225 04:13:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.225 04:13:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:18.225 04:13:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:18.225 04:13:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:18.225 04:13:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.225 04:13:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.225 04:13:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:18.225 [2024-11-21 04:13:18.119492] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:18.225 04:13:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.225 
04:13:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 9ee37413-c3aa-4392-8d6d-b53f610e718e '!=' 9ee37413-c3aa-4392-8d6d-b53f610e718e ']' 00:15:18.225 04:13:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:15:18.225 04:13:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:18.225 04:13:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:18.225 04:13:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:18.225 04:13:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.225 04:13:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.225 [2024-11-21 04:13:18.167312] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:18.225 04:13:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.225 04:13:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:18.225 04:13:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:18.225 04:13:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:18.225 04:13:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:18.225 04:13:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:18.225 04:13:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:18.225 04:13:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:18.225 04:13:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:18.225 04:13:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:15:18.225 04:13:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:18.225 04:13:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.225 04:13:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.225 04:13:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.225 04:13:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.486 04:13:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.486 04:13:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:18.486 "name": "raid_bdev1", 00:15:18.486 "uuid": "9ee37413-c3aa-4392-8d6d-b53f610e718e", 00:15:18.486 "strip_size_kb": 64, 00:15:18.486 "state": "online", 00:15:18.486 "raid_level": "raid5f", 00:15:18.486 "superblock": true, 00:15:18.486 "num_base_bdevs": 4, 00:15:18.486 "num_base_bdevs_discovered": 3, 00:15:18.486 "num_base_bdevs_operational": 3, 00:15:18.486 "base_bdevs_list": [ 00:15:18.486 { 00:15:18.486 "name": null, 00:15:18.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.486 "is_configured": false, 00:15:18.486 "data_offset": 0, 00:15:18.486 "data_size": 63488 00:15:18.486 }, 00:15:18.486 { 00:15:18.486 "name": "pt2", 00:15:18.486 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:18.486 "is_configured": true, 00:15:18.486 "data_offset": 2048, 00:15:18.486 "data_size": 63488 00:15:18.486 }, 00:15:18.486 { 00:15:18.486 "name": "pt3", 00:15:18.486 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:18.486 "is_configured": true, 00:15:18.486 "data_offset": 2048, 00:15:18.486 "data_size": 63488 00:15:18.486 }, 00:15:18.486 { 00:15:18.486 "name": "pt4", 00:15:18.486 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:18.486 "is_configured": true, 00:15:18.486 
"data_offset": 2048, 00:15:18.486 "data_size": 63488 00:15:18.486 } 00:15:18.486 ] 00:15:18.486 }' 00:15:18.486 04:13:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:18.486 04:13:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.746 04:13:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:18.746 04:13:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.746 04:13:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.746 [2024-11-21 04:13:18.586538] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:18.746 [2024-11-21 04:13:18.586602] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:18.746 [2024-11-21 04:13:18.586680] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:18.746 [2024-11-21 04:13:18.586795] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:18.746 [2024-11-21 04:13:18.586851] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:15:18.746 04:13:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.746 04:13:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.746 04:13:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:18.746 04:13:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.746 04:13:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.746 04:13:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.746 04:13:18 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:18.746 04:13:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:18.746 04:13:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:18.746 04:13:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:18.746 04:13:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:18.746 04:13:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.746 04:13:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.746 04:13:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.746 04:13:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:18.746 04:13:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:18.746 04:13:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:15:18.746 04:13:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.746 04:13:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.746 04:13:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.746 04:13:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:18.746 04:13:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:18.746 04:13:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:15:18.746 04:13:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.746 04:13:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.746 04:13:18 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.746 04:13:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:18.746 04:13:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:18.746 04:13:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:18.746 04:13:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:18.746 04:13:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:18.746 04:13:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.746 04:13:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.746 [2024-11-21 04:13:18.678369] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:18.746 [2024-11-21 04:13:18.678468] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:18.746 [2024-11-21 04:13:18.678507] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:18.746 [2024-11-21 04:13:18.678534] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:18.746 [2024-11-21 04:13:18.680856] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:18.746 [2024-11-21 04:13:18.680928] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:18.746 [2024-11-21 04:13:18.681007] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:18.746 [2024-11-21 04:13:18.681067] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:18.746 pt2 00:15:18.746 04:13:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.746 04:13:18 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:18.746 04:13:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:18.746 04:13:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:18.746 04:13:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:18.746 04:13:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:18.746 04:13:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:18.746 04:13:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:18.746 04:13:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:18.746 04:13:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:18.746 04:13:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:18.746 04:13:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.746 04:13:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.746 04:13:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.746 04:13:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.746 04:13:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.007 04:13:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:19.007 "name": "raid_bdev1", 00:15:19.007 "uuid": "9ee37413-c3aa-4392-8d6d-b53f610e718e", 00:15:19.007 "strip_size_kb": 64, 00:15:19.007 "state": "configuring", 00:15:19.007 "raid_level": "raid5f", 00:15:19.007 "superblock": true, 00:15:19.007 
"num_base_bdevs": 4, 00:15:19.007 "num_base_bdevs_discovered": 1, 00:15:19.007 "num_base_bdevs_operational": 3, 00:15:19.007 "base_bdevs_list": [ 00:15:19.007 { 00:15:19.007 "name": null, 00:15:19.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.007 "is_configured": false, 00:15:19.007 "data_offset": 2048, 00:15:19.007 "data_size": 63488 00:15:19.007 }, 00:15:19.007 { 00:15:19.007 "name": "pt2", 00:15:19.007 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:19.007 "is_configured": true, 00:15:19.007 "data_offset": 2048, 00:15:19.007 "data_size": 63488 00:15:19.007 }, 00:15:19.007 { 00:15:19.007 "name": null, 00:15:19.007 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:19.007 "is_configured": false, 00:15:19.007 "data_offset": 2048, 00:15:19.007 "data_size": 63488 00:15:19.007 }, 00:15:19.007 { 00:15:19.007 "name": null, 00:15:19.007 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:19.007 "is_configured": false, 00:15:19.007 "data_offset": 2048, 00:15:19.007 "data_size": 63488 00:15:19.007 } 00:15:19.007 ] 00:15:19.007 }' 00:15:19.007 04:13:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:19.007 04:13:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.268 04:13:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:19.268 04:13:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:19.268 04:13:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:19.268 04:13:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.268 04:13:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.268 [2024-11-21 04:13:19.125672] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:19.268 [2024-11-21 
04:13:19.125780] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:19.268 [2024-11-21 04:13:19.125813] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:19.268 [2024-11-21 04:13:19.125843] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:19.268 [2024-11-21 04:13:19.126199] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:19.268 [2024-11-21 04:13:19.126291] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:19.268 [2024-11-21 04:13:19.126388] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:19.268 [2024-11-21 04:13:19.126441] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:19.268 pt3 00:15:19.268 04:13:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.268 04:13:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:19.268 04:13:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:19.268 04:13:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:19.268 04:13:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:19.268 04:13:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:19.268 04:13:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:19.268 04:13:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:19.268 04:13:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:19.268 04:13:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:15:19.268 04:13:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:19.268 04:13:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.268 04:13:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.268 04:13:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.268 04:13:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.268 04:13:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.268 04:13:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:19.268 "name": "raid_bdev1", 00:15:19.268 "uuid": "9ee37413-c3aa-4392-8d6d-b53f610e718e", 00:15:19.268 "strip_size_kb": 64, 00:15:19.268 "state": "configuring", 00:15:19.268 "raid_level": "raid5f", 00:15:19.268 "superblock": true, 00:15:19.268 "num_base_bdevs": 4, 00:15:19.268 "num_base_bdevs_discovered": 2, 00:15:19.268 "num_base_bdevs_operational": 3, 00:15:19.268 "base_bdevs_list": [ 00:15:19.268 { 00:15:19.268 "name": null, 00:15:19.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.268 "is_configured": false, 00:15:19.268 "data_offset": 2048, 00:15:19.268 "data_size": 63488 00:15:19.268 }, 00:15:19.268 { 00:15:19.268 "name": "pt2", 00:15:19.268 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:19.268 "is_configured": true, 00:15:19.268 "data_offset": 2048, 00:15:19.268 "data_size": 63488 00:15:19.268 }, 00:15:19.268 { 00:15:19.268 "name": "pt3", 00:15:19.268 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:19.268 "is_configured": true, 00:15:19.268 "data_offset": 2048, 00:15:19.268 "data_size": 63488 00:15:19.268 }, 00:15:19.268 { 00:15:19.268 "name": null, 00:15:19.268 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:19.268 "is_configured": false, 00:15:19.268 "data_offset": 2048, 
00:15:19.268 "data_size": 63488 00:15:19.268 } 00:15:19.268 ] 00:15:19.268 }' 00:15:19.268 04:13:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:19.268 04:13:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.838 04:13:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:19.838 04:13:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:19.838 04:13:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:15:19.838 04:13:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:19.838 04:13:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.838 04:13:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.838 [2024-11-21 04:13:19.612855] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:19.838 [2024-11-21 04:13:19.612946] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:19.838 [2024-11-21 04:13:19.612979] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:15:19.838 [2024-11-21 04:13:19.613008] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:19.838 [2024-11-21 04:13:19.613429] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:19.838 [2024-11-21 04:13:19.613491] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:19.838 [2024-11-21 04:13:19.613585] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:19.838 [2024-11-21 04:13:19.613634] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:19.838 [2024-11-21 04:13:19.613766] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:15:19.838 [2024-11-21 04:13:19.613804] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:19.838 [2024-11-21 04:13:19.614084] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:15:19.838 [2024-11-21 04:13:19.614697] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:15:19.838 [2024-11-21 04:13:19.614716] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:15:19.838 [2024-11-21 04:13:19.614942] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:19.838 pt4 00:15:19.838 04:13:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.838 04:13:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:19.838 04:13:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:19.838 04:13:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:19.838 04:13:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:19.838 04:13:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:19.838 04:13:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:19.838 04:13:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:19.838 04:13:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:19.838 04:13:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:19.838 04:13:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:19.838 
04:13:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.838 04:13:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.838 04:13:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.838 04:13:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.838 04:13:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.838 04:13:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:19.838 "name": "raid_bdev1", 00:15:19.838 "uuid": "9ee37413-c3aa-4392-8d6d-b53f610e718e", 00:15:19.838 "strip_size_kb": 64, 00:15:19.838 "state": "online", 00:15:19.838 "raid_level": "raid5f", 00:15:19.838 "superblock": true, 00:15:19.838 "num_base_bdevs": 4, 00:15:19.839 "num_base_bdevs_discovered": 3, 00:15:19.839 "num_base_bdevs_operational": 3, 00:15:19.839 "base_bdevs_list": [ 00:15:19.839 { 00:15:19.839 "name": null, 00:15:19.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.839 "is_configured": false, 00:15:19.839 "data_offset": 2048, 00:15:19.839 "data_size": 63488 00:15:19.839 }, 00:15:19.839 { 00:15:19.839 "name": "pt2", 00:15:19.839 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:19.839 "is_configured": true, 00:15:19.839 "data_offset": 2048, 00:15:19.839 "data_size": 63488 00:15:19.839 }, 00:15:19.839 { 00:15:19.839 "name": "pt3", 00:15:19.839 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:19.839 "is_configured": true, 00:15:19.839 "data_offset": 2048, 00:15:19.839 "data_size": 63488 00:15:19.839 }, 00:15:19.839 { 00:15:19.839 "name": "pt4", 00:15:19.839 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:19.839 "is_configured": true, 00:15:19.839 "data_offset": 2048, 00:15:19.839 "data_size": 63488 00:15:19.839 } 00:15:19.839 ] 00:15:19.839 }' 00:15:19.839 04:13:19 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:19.839 04:13:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.099 04:13:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:20.099 04:13:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.099 04:13:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.099 [2024-11-21 04:13:20.064857] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:20.099 [2024-11-21 04:13:20.064929] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:20.099 [2024-11-21 04:13:20.065003] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:20.099 [2024-11-21 04:13:20.065110] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:20.099 [2024-11-21 04:13:20.065164] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:15:20.099 04:13:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.360 04:13:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.360 04:13:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:20.360 04:13:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.360 04:13:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.360 04:13:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.360 04:13:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:20.360 04:13:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:15:20.360 04:13:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:15:20.360 04:13:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:15:20.360 04:13:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:15:20.360 04:13:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.360 04:13:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.360 04:13:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.360 04:13:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:20.360 04:13:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.360 04:13:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.360 [2024-11-21 04:13:20.136747] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:20.360 [2024-11-21 04:13:20.136847] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:20.360 [2024-11-21 04:13:20.136878] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:15:20.360 [2024-11-21 04:13:20.136905] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:20.360 [2024-11-21 04:13:20.139335] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:20.360 [2024-11-21 04:13:20.139401] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:20.360 [2024-11-21 04:13:20.139497] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:20.360 [2024-11-21 04:13:20.139560] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:20.360 
[2024-11-21 04:13:20.139698] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:20.360 [2024-11-21 04:13:20.139751] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:20.360 [2024-11-21 04:13:20.139825] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state configuring 00:15:20.360 pt1 00:15:20.360 [2024-11-21 04:13:20.139890] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:20.360 [2024-11-21 04:13:20.140016] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:20.360 04:13:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.360 04:13:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:15:20.360 04:13:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:20.360 04:13:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:20.360 04:13:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:20.360 04:13:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:20.360 04:13:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:20.360 04:13:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:20.360 04:13:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:20.360 04:13:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.360 04:13:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.360 04:13:20 bdev_raid.raid5f_superblock_test --
bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.360 04:13:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.360 04:13:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.360 04:13:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.360 04:13:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.360 04:13:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.360 04:13:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:20.360 "name": "raid_bdev1", 00:15:20.360 "uuid": "9ee37413-c3aa-4392-8d6d-b53f610e718e", 00:15:20.360 "strip_size_kb": 64, 00:15:20.360 "state": "configuring", 00:15:20.360 "raid_level": "raid5f", 00:15:20.360 "superblock": true, 00:15:20.360 "num_base_bdevs": 4, 00:15:20.360 "num_base_bdevs_discovered": 2, 00:15:20.360 "num_base_bdevs_operational": 3, 00:15:20.360 "base_bdevs_list": [ 00:15:20.360 { 00:15:20.360 "name": null, 00:15:20.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.360 "is_configured": false, 00:15:20.360 "data_offset": 2048, 00:15:20.360 "data_size": 63488 00:15:20.360 }, 00:15:20.360 { 00:15:20.360 "name": "pt2", 00:15:20.360 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:20.360 "is_configured": true, 00:15:20.360 "data_offset": 2048, 00:15:20.360 "data_size": 63488 00:15:20.360 }, 00:15:20.360 { 00:15:20.360 "name": "pt3", 00:15:20.360 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:20.360 "is_configured": true, 00:15:20.360 "data_offset": 2048, 00:15:20.360 "data_size": 63488 00:15:20.360 }, 00:15:20.360 { 00:15:20.360 "name": null, 00:15:20.360 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:20.360 "is_configured": false, 00:15:20.360 "data_offset": 2048, 00:15:20.360 "data_size": 63488 00:15:20.360 } 00:15:20.360 ] 
00:15:20.360 }' 00:15:20.360 04:13:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:20.361 04:13:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.931 04:13:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:20.931 04:13:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:15:20.931 04:13:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.931 04:13:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.931 04:13:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.931 04:13:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:15:20.931 04:13:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:20.931 04:13:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.931 04:13:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.931 [2024-11-21 04:13:20.632343] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:20.931 [2024-11-21 04:13:20.632443] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:20.931 [2024-11-21 04:13:20.632509] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:15:20.931 [2024-11-21 04:13:20.632551] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:20.931 [2024-11-21 04:13:20.632945] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:20.931 [2024-11-21 04:13:20.633008] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:15:20.931 [2024-11-21 04:13:20.633109] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:20.931 [2024-11-21 04:13:20.633147] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:20.931 [2024-11-21 04:13:20.633265] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:15:20.931 [2024-11-21 04:13:20.633278] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:20.931 [2024-11-21 04:13:20.633519] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:15:20.931 [2024-11-21 04:13:20.634079] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:15:20.931 [2024-11-21 04:13:20.634100] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:15:20.931 [2024-11-21 04:13:20.634311] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:20.931 pt4 00:15:20.931 04:13:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.931 04:13:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:20.931 04:13:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:20.931 04:13:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:20.931 04:13:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:20.931 04:13:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:20.931 04:13:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:20.931 04:13:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:20.931 04:13:20 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.931 04:13:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.931 04:13:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.931 04:13:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.931 04:13:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.931 04:13:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.931 04:13:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.931 04:13:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.931 04:13:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:20.931 "name": "raid_bdev1", 00:15:20.931 "uuid": "9ee37413-c3aa-4392-8d6d-b53f610e718e", 00:15:20.931 "strip_size_kb": 64, 00:15:20.931 "state": "online", 00:15:20.931 "raid_level": "raid5f", 00:15:20.931 "superblock": true, 00:15:20.931 "num_base_bdevs": 4, 00:15:20.931 "num_base_bdevs_discovered": 3, 00:15:20.931 "num_base_bdevs_operational": 3, 00:15:20.931 "base_bdevs_list": [ 00:15:20.931 { 00:15:20.931 "name": null, 00:15:20.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.931 "is_configured": false, 00:15:20.931 "data_offset": 2048, 00:15:20.931 "data_size": 63488 00:15:20.931 }, 00:15:20.931 { 00:15:20.931 "name": "pt2", 00:15:20.931 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:20.931 "is_configured": true, 00:15:20.931 "data_offset": 2048, 00:15:20.931 "data_size": 63488 00:15:20.931 }, 00:15:20.931 { 00:15:20.931 "name": "pt3", 00:15:20.931 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:20.931 "is_configured": true, 00:15:20.931 "data_offset": 2048, 00:15:20.931 "data_size": 63488 
00:15:20.931 }, 00:15:20.931 { 00:15:20.931 "name": "pt4", 00:15:20.931 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:20.931 "is_configured": true, 00:15:20.931 "data_offset": 2048, 00:15:20.931 "data_size": 63488 00:15:20.931 } 00:15:20.931 ] 00:15:20.931 }' 00:15:20.931 04:13:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:20.931 04:13:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.192 04:13:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:21.192 04:13:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:21.192 04:13:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.192 04:13:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.192 04:13:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.192 04:13:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:21.192 04:13:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:21.192 04:13:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:21.192 04:13:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.192 04:13:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.192 [2024-11-21 04:13:21.128483] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:21.192 04:13:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.192 04:13:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 9ee37413-c3aa-4392-8d6d-b53f610e718e '!=' 9ee37413-c3aa-4392-8d6d-b53f610e718e ']' 00:15:21.452 04:13:21 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 94602 00:15:21.452 04:13:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 94602 ']' 00:15:21.452 04:13:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 94602 00:15:21.452 04:13:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:15:21.452 04:13:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:21.452 04:13:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94602 00:15:21.452 killing process with pid 94602 00:15:21.452 04:13:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:21.452 04:13:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:21.452 04:13:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94602' 00:15:21.452 04:13:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 94602 00:15:21.452 [2024-11-21 04:13:21.203831] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:21.452 [2024-11-21 04:13:21.203896] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:21.452 [2024-11-21 04:13:21.203963] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:21.452 [2024-11-21 04:13:21.203972] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:15:21.452 04:13:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 94602 00:15:21.452 [2024-11-21 04:13:21.281783] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:21.713 04:13:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:15:21.713 
00:15:21.713 real 0m7.321s 00:15:21.713 user 0m12.056s 00:15:21.713 sys 0m1.700s 00:15:21.713 04:13:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:21.713 ************************************ 00:15:21.713 END TEST raid5f_superblock_test 00:15:21.713 ************************************ 00:15:21.713 04:13:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.713 04:13:21 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:15:21.713 04:13:21 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:15:21.713 04:13:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:21.713 04:13:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:21.713 04:13:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:21.974 ************************************ 00:15:21.974 START TEST raid5f_rebuild_test 00:15:21.974 ************************************ 00:15:21.974 04:13:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:15:21.974 04:13:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:21.974 04:13:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:21.974 04:13:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:21.974 04:13:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:21.974 04:13:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:21.974 04:13:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:21.974 04:13:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:21.974 04:13:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:15:21.974 04:13:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:21.974 04:13:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:21.974 04:13:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:21.974 04:13:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:21.974 04:13:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:21.974 04:13:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:21.974 04:13:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:21.974 04:13:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:21.974 04:13:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:15:21.974 04:13:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:21.974 04:13:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:21.974 04:13:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:21.974 04:13:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:21.974 04:13:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:21.974 04:13:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:21.974 04:13:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:21.974 04:13:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:21.974 04:13:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:21.974 04:13:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:21.974 04:13:21 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:21.974 04:13:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:21.974 04:13:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:21.974 04:13:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:21.974 04:13:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=95075 00:15:21.974 04:13:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:21.974 04:13:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 95075 00:15:21.974 04:13:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 95075 ']' 00:15:21.974 04:13:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:21.974 04:13:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:21.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:21.974 04:13:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:21.974 04:13:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:21.974 04:13:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.974 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:21.974 Zero copy mechanism will not be used. 00:15:21.974 [2024-11-21 04:13:21.779814] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:15:21.974 [2024-11-21 04:13:21.779922] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95075 ] 00:15:21.974 [2024-11-21 04:13:21.934787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:22.235 [2024-11-21 04:13:21.974990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:22.235 [2024-11-21 04:13:22.051994] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:22.235 [2024-11-21 04:13:22.052040] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:22.806 04:13:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:22.806 04:13:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:15:22.806 04:13:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:22.806 04:13:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:22.806 04:13:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.806 04:13:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.806 BaseBdev1_malloc 00:15:22.806 04:13:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.806 04:13:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:22.806 04:13:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.806 04:13:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.806 [2024-11-21 04:13:22.619059] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:15:22.806 [2024-11-21 04:13:22.619202] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:22.806 [2024-11-21 04:13:22.619270] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:15:22.806 [2024-11-21 04:13:22.619333] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:22.806 [2024-11-21 04:13:22.621783] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:22.806 [2024-11-21 04:13:22.621855] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:22.806 BaseBdev1 00:15:22.806 04:13:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.806 04:13:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:22.806 04:13:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:22.806 04:13:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.806 04:13:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.806 BaseBdev2_malloc 00:15:22.806 04:13:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.806 04:13:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:22.806 04:13:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.806 04:13:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.806 [2024-11-21 04:13:22.653692] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:22.806 [2024-11-21 04:13:22.653785] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:22.806 [2024-11-21 04:13:22.653811] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:22.806 [2024-11-21 04:13:22.653820] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:22.806 [2024-11-21 04:13:22.656181] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:22.806 [2024-11-21 04:13:22.656237] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:22.806 BaseBdev2 00:15:22.806 04:13:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.806 04:13:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:22.806 04:13:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:22.806 04:13:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.806 04:13:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.806 BaseBdev3_malloc 00:15:22.806 04:13:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.806 04:13:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:22.806 04:13:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.806 04:13:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.806 [2024-11-21 04:13:22.688372] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:22.806 [2024-11-21 04:13:22.688473] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:22.806 [2024-11-21 04:13:22.688514] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:22.806 [2024-11-21 04:13:22.688540] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:22.806 
[2024-11-21 04:13:22.690921] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:22.806 [2024-11-21 04:13:22.690988] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:22.806 BaseBdev3 00:15:22.806 04:13:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.806 04:13:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:22.806 04:13:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:22.806 04:13:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.806 04:13:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.806 BaseBdev4_malloc 00:15:22.806 04:13:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.806 04:13:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:22.806 04:13:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.806 04:13:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.806 [2024-11-21 04:13:22.737702] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:22.806 [2024-11-21 04:13:22.737797] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:22.806 [2024-11-21 04:13:22.737854] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:22.806 [2024-11-21 04:13:22.737875] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:22.806 BaseBdev4 00:15:22.806 [2024-11-21 04:13:22.741583] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:22.806 [2024-11-21 04:13:22.741631] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:22.806 04:13:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.806 04:13:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:22.806 04:13:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.806 04:13:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.806 spare_malloc 00:15:22.806 04:13:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.806 04:13:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:22.806 04:13:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.806 04:13:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.067 spare_delay 00:15:23.067 04:13:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.067 04:13:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:23.067 04:13:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.067 04:13:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.067 [2024-11-21 04:13:22.785795] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:23.067 [2024-11-21 04:13:22.785892] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:23.067 [2024-11-21 04:13:22.785957] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:23.067 [2024-11-21 04:13:22.785989] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:23.067 [2024-11-21 04:13:22.788407] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:23.067 [2024-11-21 04:13:22.788478] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:23.067 spare 00:15:23.067 04:13:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.067 04:13:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:23.067 04:13:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.067 04:13:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.067 [2024-11-21 04:13:22.797878] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:23.067 [2024-11-21 04:13:22.800063] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:23.067 [2024-11-21 04:13:22.800166] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:23.067 [2024-11-21 04:13:22.800279] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:23.067 [2024-11-21 04:13:22.800404] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:15:23.067 [2024-11-21 04:13:22.800450] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:23.067 [2024-11-21 04:13:22.800737] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:15:23.067 [2024-11-21 04:13:22.801254] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:15:23.067 [2024-11-21 04:13:22.801304] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:15:23.067 [2024-11-21 04:13:22.801479] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:15:23.067 04:13:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.067 04:13:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:23.067 04:13:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:23.067 04:13:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:23.067 04:13:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:23.067 04:13:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:23.067 04:13:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:23.067 04:13:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:23.067 04:13:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:23.067 04:13:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:23.067 04:13:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:23.067 04:13:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.067 04:13:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.067 04:13:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.067 04:13:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.067 04:13:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.067 04:13:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:23.067 "name": "raid_bdev1", 00:15:23.067 "uuid": "db56b99c-eaed-4060-b0cd-1d56f54f7346", 00:15:23.067 "strip_size_kb": 64, 00:15:23.067 "state": 
"online", 00:15:23.067 "raid_level": "raid5f", 00:15:23.067 "superblock": false, 00:15:23.067 "num_base_bdevs": 4, 00:15:23.067 "num_base_bdevs_discovered": 4, 00:15:23.067 "num_base_bdevs_operational": 4, 00:15:23.067 "base_bdevs_list": [ 00:15:23.067 { 00:15:23.067 "name": "BaseBdev1", 00:15:23.067 "uuid": "64edde0a-de5a-5678-8c3f-c572386e4cf3", 00:15:23.067 "is_configured": true, 00:15:23.067 "data_offset": 0, 00:15:23.067 "data_size": 65536 00:15:23.067 }, 00:15:23.067 { 00:15:23.067 "name": "BaseBdev2", 00:15:23.067 "uuid": "3ff7dc5d-f261-57d3-84e6-b46a75bc5c49", 00:15:23.067 "is_configured": true, 00:15:23.067 "data_offset": 0, 00:15:23.067 "data_size": 65536 00:15:23.067 }, 00:15:23.067 { 00:15:23.067 "name": "BaseBdev3", 00:15:23.067 "uuid": "f298928f-5bfd-534a-a28d-f3165e163875", 00:15:23.067 "is_configured": true, 00:15:23.067 "data_offset": 0, 00:15:23.067 "data_size": 65536 00:15:23.067 }, 00:15:23.067 { 00:15:23.067 "name": "BaseBdev4", 00:15:23.067 "uuid": "cc68539f-c335-5a6a-a0ce-a96b5d564b7f", 00:15:23.067 "is_configured": true, 00:15:23.067 "data_offset": 0, 00:15:23.067 "data_size": 65536 00:15:23.067 } 00:15:23.067 ] 00:15:23.067 }' 00:15:23.067 04:13:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:23.067 04:13:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.328 04:13:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:23.328 04:13:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:23.328 04:13:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.328 04:13:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.328 [2024-11-21 04:13:23.255861] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:23.328 04:13:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:15:23.328 04:13:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:15:23.328 04:13:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.328 04:13:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:23.328 04:13:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.328 04:13:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.589 04:13:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.589 04:13:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:23.589 04:13:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:23.589 04:13:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:23.589 04:13:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:23.589 04:13:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:23.589 04:13:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:23.589 04:13:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:23.589 04:13:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:23.589 04:13:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:23.589 04:13:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:23.589 04:13:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:23.589 04:13:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:23.589 04:13:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 
1 )) 00:15:23.589 04:13:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:23.589 [2024-11-21 04:13:23.503300] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:15:23.589 /dev/nbd0 00:15:23.850 04:13:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:23.850 04:13:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:23.850 04:13:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:23.850 04:13:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:23.850 04:13:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:23.850 04:13:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:23.850 04:13:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:23.850 04:13:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:23.850 04:13:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:23.850 04:13:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:23.850 04:13:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:23.850 1+0 records in 00:15:23.850 1+0 records out 00:15:23.850 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000561159 s, 7.3 MB/s 00:15:23.850 04:13:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:23.850 04:13:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:23.850 04:13:23 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:23.850 04:13:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:23.850 04:13:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:23.850 04:13:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:23.850 04:13:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:23.850 04:13:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:23.850 04:13:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:15:23.850 04:13:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:15:23.850 04:13:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:15:24.110 512+0 records in 00:15:24.110 512+0 records out 00:15:24.110 100663296 bytes (101 MB, 96 MiB) copied, 0.433448 s, 232 MB/s 00:15:24.110 04:13:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:24.110 04:13:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:24.110 04:13:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:24.110 04:13:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:24.110 04:13:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:24.110 04:13:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:24.110 04:13:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:24.370 04:13:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:24.371 
[2024-11-21 04:13:24.239863] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:24.371 04:13:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:24.371 04:13:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:24.371 04:13:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:24.371 04:13:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:24.371 04:13:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:24.371 04:13:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:24.371 04:13:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:24.371 04:13:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:24.371 04:13:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.371 04:13:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.371 [2024-11-21 04:13:24.258855] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:24.371 04:13:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.371 04:13:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:24.371 04:13:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:24.371 04:13:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:24.371 04:13:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:24.371 04:13:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:24.371 04:13:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:15:24.371 04:13:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:24.371 04:13:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:24.371 04:13:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:24.371 04:13:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:24.371 04:13:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.371 04:13:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:24.371 04:13:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.371 04:13:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.371 04:13:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.371 04:13:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:24.371 "name": "raid_bdev1", 00:15:24.371 "uuid": "db56b99c-eaed-4060-b0cd-1d56f54f7346", 00:15:24.371 "strip_size_kb": 64, 00:15:24.371 "state": "online", 00:15:24.371 "raid_level": "raid5f", 00:15:24.371 "superblock": false, 00:15:24.371 "num_base_bdevs": 4, 00:15:24.371 "num_base_bdevs_discovered": 3, 00:15:24.371 "num_base_bdevs_operational": 3, 00:15:24.371 "base_bdevs_list": [ 00:15:24.371 { 00:15:24.371 "name": null, 00:15:24.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.371 "is_configured": false, 00:15:24.371 "data_offset": 0, 00:15:24.371 "data_size": 65536 00:15:24.371 }, 00:15:24.371 { 00:15:24.371 "name": "BaseBdev2", 00:15:24.371 "uuid": "3ff7dc5d-f261-57d3-84e6-b46a75bc5c49", 00:15:24.371 "is_configured": true, 00:15:24.371 "data_offset": 0, 00:15:24.371 "data_size": 65536 00:15:24.371 }, 00:15:24.371 { 00:15:24.371 "name": "BaseBdev3", 00:15:24.371 "uuid": 
"f298928f-5bfd-534a-a28d-f3165e163875", 00:15:24.371 "is_configured": true, 00:15:24.371 "data_offset": 0, 00:15:24.371 "data_size": 65536 00:15:24.371 }, 00:15:24.371 { 00:15:24.371 "name": "BaseBdev4", 00:15:24.371 "uuid": "cc68539f-c335-5a6a-a0ce-a96b5d564b7f", 00:15:24.371 "is_configured": true, 00:15:24.371 "data_offset": 0, 00:15:24.371 "data_size": 65536 00:15:24.371 } 00:15:24.371 ] 00:15:24.371 }' 00:15:24.371 04:13:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.371 04:13:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.941 04:13:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:24.941 04:13:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.941 04:13:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.941 [2024-11-21 04:13:24.737999] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:24.941 [2024-11-21 04:13:24.745547] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027da0 00:15:24.941 04:13:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.941 04:13:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:24.941 [2024-11-21 04:13:24.748107] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:25.881 04:13:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:25.881 04:13:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:25.881 04:13:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:25.881 04:13:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:25.881 04:13:25 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:25.882 04:13:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.882 04:13:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.882 04:13:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.882 04:13:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.882 04:13:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.882 04:13:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:25.882 "name": "raid_bdev1", 00:15:25.882 "uuid": "db56b99c-eaed-4060-b0cd-1d56f54f7346", 00:15:25.882 "strip_size_kb": 64, 00:15:25.882 "state": "online", 00:15:25.882 "raid_level": "raid5f", 00:15:25.882 "superblock": false, 00:15:25.882 "num_base_bdevs": 4, 00:15:25.882 "num_base_bdevs_discovered": 4, 00:15:25.882 "num_base_bdevs_operational": 4, 00:15:25.882 "process": { 00:15:25.882 "type": "rebuild", 00:15:25.882 "target": "spare", 00:15:25.882 "progress": { 00:15:25.882 "blocks": 19200, 00:15:25.882 "percent": 9 00:15:25.882 } 00:15:25.882 }, 00:15:25.882 "base_bdevs_list": [ 00:15:25.882 { 00:15:25.882 "name": "spare", 00:15:25.882 "uuid": "78067969-43ba-5e33-8d6b-26379e3c8327", 00:15:25.882 "is_configured": true, 00:15:25.882 "data_offset": 0, 00:15:25.882 "data_size": 65536 00:15:25.882 }, 00:15:25.882 { 00:15:25.882 "name": "BaseBdev2", 00:15:25.882 "uuid": "3ff7dc5d-f261-57d3-84e6-b46a75bc5c49", 00:15:25.882 "is_configured": true, 00:15:25.882 "data_offset": 0, 00:15:25.882 "data_size": 65536 00:15:25.882 }, 00:15:25.882 { 00:15:25.882 "name": "BaseBdev3", 00:15:25.882 "uuid": "f298928f-5bfd-534a-a28d-f3165e163875", 00:15:25.882 "is_configured": true, 00:15:25.882 "data_offset": 0, 00:15:25.882 "data_size": 65536 00:15:25.882 }, 
00:15:25.882 { 00:15:25.882 "name": "BaseBdev4", 00:15:25.882 "uuid": "cc68539f-c335-5a6a-a0ce-a96b5d564b7f", 00:15:25.882 "is_configured": true, 00:15:25.882 "data_offset": 0, 00:15:25.882 "data_size": 65536 00:15:25.882 } 00:15:25.882 ] 00:15:25.882 }' 00:15:25.882 04:13:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:25.882 04:13:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:25.882 04:13:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:26.142 04:13:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:26.142 04:13:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:26.142 04:13:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.142 04:13:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.142 [2024-11-21 04:13:25.903741] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:26.142 [2024-11-21 04:13:25.954843] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:26.142 [2024-11-21 04:13:25.954950] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:26.142 [2024-11-21 04:13:25.954989] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:26.142 [2024-11-21 04:13:25.955009] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:26.142 04:13:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.142 04:13:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:26.142 04:13:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:15:26.142 04:13:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:26.142 04:13:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:26.142 04:13:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:26.142 04:13:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:26.142 04:13:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:26.142 04:13:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:26.142 04:13:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:26.142 04:13:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:26.142 04:13:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.142 04:13:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.142 04:13:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.142 04:13:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.142 04:13:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.142 04:13:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:26.142 "name": "raid_bdev1", 00:15:26.142 "uuid": "db56b99c-eaed-4060-b0cd-1d56f54f7346", 00:15:26.142 "strip_size_kb": 64, 00:15:26.142 "state": "online", 00:15:26.142 "raid_level": "raid5f", 00:15:26.142 "superblock": false, 00:15:26.142 "num_base_bdevs": 4, 00:15:26.142 "num_base_bdevs_discovered": 3, 00:15:26.142 "num_base_bdevs_operational": 3, 00:15:26.142 "base_bdevs_list": [ 00:15:26.142 { 00:15:26.142 "name": null, 00:15:26.142 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:26.142 "is_configured": false, 00:15:26.142 "data_offset": 0, 00:15:26.142 "data_size": 65536 00:15:26.142 }, 00:15:26.142 { 00:15:26.142 "name": "BaseBdev2", 00:15:26.142 "uuid": "3ff7dc5d-f261-57d3-84e6-b46a75bc5c49", 00:15:26.142 "is_configured": true, 00:15:26.142 "data_offset": 0, 00:15:26.142 "data_size": 65536 00:15:26.142 }, 00:15:26.142 { 00:15:26.142 "name": "BaseBdev3", 00:15:26.142 "uuid": "f298928f-5bfd-534a-a28d-f3165e163875", 00:15:26.142 "is_configured": true, 00:15:26.142 "data_offset": 0, 00:15:26.142 "data_size": 65536 00:15:26.142 }, 00:15:26.142 { 00:15:26.142 "name": "BaseBdev4", 00:15:26.142 "uuid": "cc68539f-c335-5a6a-a0ce-a96b5d564b7f", 00:15:26.142 "is_configured": true, 00:15:26.142 "data_offset": 0, 00:15:26.142 "data_size": 65536 00:15:26.142 } 00:15:26.142 ] 00:15:26.142 }' 00:15:26.142 04:13:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:26.142 04:13:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.712 04:13:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:26.712 04:13:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:26.712 04:13:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:26.712 04:13:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:26.712 04:13:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:26.712 04:13:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.712 04:13:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.712 04:13:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.712 04:13:26 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.712 04:13:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.712 04:13:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:26.712 "name": "raid_bdev1", 00:15:26.712 "uuid": "db56b99c-eaed-4060-b0cd-1d56f54f7346", 00:15:26.712 "strip_size_kb": 64, 00:15:26.713 "state": "online", 00:15:26.713 "raid_level": "raid5f", 00:15:26.713 "superblock": false, 00:15:26.713 "num_base_bdevs": 4, 00:15:26.713 "num_base_bdevs_discovered": 3, 00:15:26.713 "num_base_bdevs_operational": 3, 00:15:26.713 "base_bdevs_list": [ 00:15:26.713 { 00:15:26.713 "name": null, 00:15:26.713 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.713 "is_configured": false, 00:15:26.713 "data_offset": 0, 00:15:26.713 "data_size": 65536 00:15:26.713 }, 00:15:26.713 { 00:15:26.713 "name": "BaseBdev2", 00:15:26.713 "uuid": "3ff7dc5d-f261-57d3-84e6-b46a75bc5c49", 00:15:26.713 "is_configured": true, 00:15:26.713 "data_offset": 0, 00:15:26.713 "data_size": 65536 00:15:26.713 }, 00:15:26.713 { 00:15:26.713 "name": "BaseBdev3", 00:15:26.713 "uuid": "f298928f-5bfd-534a-a28d-f3165e163875", 00:15:26.713 "is_configured": true, 00:15:26.713 "data_offset": 0, 00:15:26.713 "data_size": 65536 00:15:26.713 }, 00:15:26.713 { 00:15:26.713 "name": "BaseBdev4", 00:15:26.713 "uuid": "cc68539f-c335-5a6a-a0ce-a96b5d564b7f", 00:15:26.713 "is_configured": true, 00:15:26.713 "data_offset": 0, 00:15:26.713 "data_size": 65536 00:15:26.713 } 00:15:26.713 ] 00:15:26.713 }' 00:15:26.713 04:13:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:26.713 04:13:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:26.713 04:13:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:26.713 04:13:26 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:26.713 04:13:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:26.713 04:13:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.713 04:13:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.713 [2024-11-21 04:13:26.599489] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:26.713 [2024-11-21 04:13:26.605988] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027e70 00:15:26.713 04:13:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.713 04:13:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:26.713 [2024-11-21 04:13:26.608545] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:27.653 04:13:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:27.653 04:13:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:27.653 04:13:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:27.653 04:13:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:27.653 04:13:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:27.653 04:13:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.653 04:13:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.653 04:13:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.653 04:13:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.913 04:13:27 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.913 04:13:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:27.913 "name": "raid_bdev1", 00:15:27.913 "uuid": "db56b99c-eaed-4060-b0cd-1d56f54f7346", 00:15:27.913 "strip_size_kb": 64, 00:15:27.913 "state": "online", 00:15:27.913 "raid_level": "raid5f", 00:15:27.913 "superblock": false, 00:15:27.913 "num_base_bdevs": 4, 00:15:27.913 "num_base_bdevs_discovered": 4, 00:15:27.913 "num_base_bdevs_operational": 4, 00:15:27.913 "process": { 00:15:27.913 "type": "rebuild", 00:15:27.913 "target": "spare", 00:15:27.913 "progress": { 00:15:27.913 "blocks": 19200, 00:15:27.913 "percent": 9 00:15:27.913 } 00:15:27.913 }, 00:15:27.913 "base_bdevs_list": [ 00:15:27.913 { 00:15:27.913 "name": "spare", 00:15:27.913 "uuid": "78067969-43ba-5e33-8d6b-26379e3c8327", 00:15:27.913 "is_configured": true, 00:15:27.913 "data_offset": 0, 00:15:27.913 "data_size": 65536 00:15:27.913 }, 00:15:27.913 { 00:15:27.913 "name": "BaseBdev2", 00:15:27.913 "uuid": "3ff7dc5d-f261-57d3-84e6-b46a75bc5c49", 00:15:27.913 "is_configured": true, 00:15:27.913 "data_offset": 0, 00:15:27.913 "data_size": 65536 00:15:27.913 }, 00:15:27.913 { 00:15:27.913 "name": "BaseBdev3", 00:15:27.913 "uuid": "f298928f-5bfd-534a-a28d-f3165e163875", 00:15:27.913 "is_configured": true, 00:15:27.913 "data_offset": 0, 00:15:27.913 "data_size": 65536 00:15:27.913 }, 00:15:27.913 { 00:15:27.913 "name": "BaseBdev4", 00:15:27.913 "uuid": "cc68539f-c335-5a6a-a0ce-a96b5d564b7f", 00:15:27.913 "is_configured": true, 00:15:27.913 "data_offset": 0, 00:15:27.913 "data_size": 65536 00:15:27.913 } 00:15:27.913 ] 00:15:27.913 }' 00:15:27.913 04:13:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:27.914 04:13:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:27.914 04:13:27 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:27.914 04:13:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:27.914 04:13:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:27.914 04:13:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:27.914 04:13:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:27.914 04:13:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=523 00:15:27.914 04:13:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:27.914 04:13:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:27.914 04:13:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:27.914 04:13:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:27.914 04:13:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:27.914 04:13:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:27.914 04:13:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.914 04:13:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.914 04:13:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.914 04:13:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.914 04:13:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.914 04:13:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:27.914 "name": "raid_bdev1", 00:15:27.914 "uuid": "db56b99c-eaed-4060-b0cd-1d56f54f7346", 
00:15:27.914 "strip_size_kb": 64, 00:15:27.914 "state": "online", 00:15:27.914 "raid_level": "raid5f", 00:15:27.914 "superblock": false, 00:15:27.914 "num_base_bdevs": 4, 00:15:27.914 "num_base_bdevs_discovered": 4, 00:15:27.914 "num_base_bdevs_operational": 4, 00:15:27.914 "process": { 00:15:27.914 "type": "rebuild", 00:15:27.914 "target": "spare", 00:15:27.914 "progress": { 00:15:27.914 "blocks": 21120, 00:15:27.914 "percent": 10 00:15:27.914 } 00:15:27.914 }, 00:15:27.914 "base_bdevs_list": [ 00:15:27.914 { 00:15:27.914 "name": "spare", 00:15:27.914 "uuid": "78067969-43ba-5e33-8d6b-26379e3c8327", 00:15:27.914 "is_configured": true, 00:15:27.914 "data_offset": 0, 00:15:27.914 "data_size": 65536 00:15:27.914 }, 00:15:27.914 { 00:15:27.914 "name": "BaseBdev2", 00:15:27.914 "uuid": "3ff7dc5d-f261-57d3-84e6-b46a75bc5c49", 00:15:27.914 "is_configured": true, 00:15:27.914 "data_offset": 0, 00:15:27.914 "data_size": 65536 00:15:27.914 }, 00:15:27.914 { 00:15:27.914 "name": "BaseBdev3", 00:15:27.914 "uuid": "f298928f-5bfd-534a-a28d-f3165e163875", 00:15:27.914 "is_configured": true, 00:15:27.914 "data_offset": 0, 00:15:27.914 "data_size": 65536 00:15:27.914 }, 00:15:27.914 { 00:15:27.914 "name": "BaseBdev4", 00:15:27.914 "uuid": "cc68539f-c335-5a6a-a0ce-a96b5d564b7f", 00:15:27.914 "is_configured": true, 00:15:27.914 "data_offset": 0, 00:15:27.914 "data_size": 65536 00:15:27.914 } 00:15:27.914 ] 00:15:27.914 }' 00:15:27.914 04:13:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:27.914 04:13:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:27.914 04:13:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:28.174 04:13:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:28.174 04:13:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:29.116 04:13:28 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:29.116 04:13:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:29.116 04:13:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:29.116 04:13:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:29.116 04:13:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:29.116 04:13:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:29.116 04:13:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.116 04:13:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.116 04:13:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.116 04:13:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.116 04:13:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.116 04:13:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:29.116 "name": "raid_bdev1", 00:15:29.116 "uuid": "db56b99c-eaed-4060-b0cd-1d56f54f7346", 00:15:29.116 "strip_size_kb": 64, 00:15:29.116 "state": "online", 00:15:29.116 "raid_level": "raid5f", 00:15:29.116 "superblock": false, 00:15:29.116 "num_base_bdevs": 4, 00:15:29.116 "num_base_bdevs_discovered": 4, 00:15:29.116 "num_base_bdevs_operational": 4, 00:15:29.116 "process": { 00:15:29.116 "type": "rebuild", 00:15:29.116 "target": "spare", 00:15:29.116 "progress": { 00:15:29.116 "blocks": 44160, 00:15:29.116 "percent": 22 00:15:29.116 } 00:15:29.116 }, 00:15:29.116 "base_bdevs_list": [ 00:15:29.116 { 00:15:29.116 "name": "spare", 00:15:29.116 "uuid": "78067969-43ba-5e33-8d6b-26379e3c8327", 
00:15:29.116 "is_configured": true, 00:15:29.116 "data_offset": 0, 00:15:29.116 "data_size": 65536 00:15:29.116 }, 00:15:29.116 { 00:15:29.116 "name": "BaseBdev2", 00:15:29.116 "uuid": "3ff7dc5d-f261-57d3-84e6-b46a75bc5c49", 00:15:29.116 "is_configured": true, 00:15:29.116 "data_offset": 0, 00:15:29.116 "data_size": 65536 00:15:29.116 }, 00:15:29.116 { 00:15:29.116 "name": "BaseBdev3", 00:15:29.116 "uuid": "f298928f-5bfd-534a-a28d-f3165e163875", 00:15:29.116 "is_configured": true, 00:15:29.116 "data_offset": 0, 00:15:29.116 "data_size": 65536 00:15:29.116 }, 00:15:29.116 { 00:15:29.116 "name": "BaseBdev4", 00:15:29.116 "uuid": "cc68539f-c335-5a6a-a0ce-a96b5d564b7f", 00:15:29.116 "is_configured": true, 00:15:29.116 "data_offset": 0, 00:15:29.116 "data_size": 65536 00:15:29.116 } 00:15:29.116 ] 00:15:29.116 }' 00:15:29.116 04:13:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:29.116 04:13:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:29.116 04:13:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:29.116 04:13:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:29.116 04:13:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:30.509 04:13:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:30.509 04:13:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:30.509 04:13:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:30.509 04:13:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:30.509 04:13:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:30.509 04:13:30 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:30.509 04:13:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.509 04:13:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.509 04:13:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.509 04:13:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.509 04:13:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.509 04:13:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:30.509 "name": "raid_bdev1", 00:15:30.509 "uuid": "db56b99c-eaed-4060-b0cd-1d56f54f7346", 00:15:30.509 "strip_size_kb": 64, 00:15:30.509 "state": "online", 00:15:30.509 "raid_level": "raid5f", 00:15:30.509 "superblock": false, 00:15:30.509 "num_base_bdevs": 4, 00:15:30.509 "num_base_bdevs_discovered": 4, 00:15:30.509 "num_base_bdevs_operational": 4, 00:15:30.509 "process": { 00:15:30.509 "type": "rebuild", 00:15:30.509 "target": "spare", 00:15:30.509 "progress": { 00:15:30.509 "blocks": 65280, 00:15:30.509 "percent": 33 00:15:30.509 } 00:15:30.509 }, 00:15:30.509 "base_bdevs_list": [ 00:15:30.510 { 00:15:30.510 "name": "spare", 00:15:30.510 "uuid": "78067969-43ba-5e33-8d6b-26379e3c8327", 00:15:30.510 "is_configured": true, 00:15:30.510 "data_offset": 0, 00:15:30.510 "data_size": 65536 00:15:30.510 }, 00:15:30.510 { 00:15:30.510 "name": "BaseBdev2", 00:15:30.510 "uuid": "3ff7dc5d-f261-57d3-84e6-b46a75bc5c49", 00:15:30.510 "is_configured": true, 00:15:30.510 "data_offset": 0, 00:15:30.510 "data_size": 65536 00:15:30.510 }, 00:15:30.510 { 00:15:30.510 "name": "BaseBdev3", 00:15:30.510 "uuid": "f298928f-5bfd-534a-a28d-f3165e163875", 00:15:30.510 "is_configured": true, 00:15:30.510 "data_offset": 0, 00:15:30.510 "data_size": 65536 00:15:30.510 }, 00:15:30.510 { 00:15:30.510 "name": 
"BaseBdev4", 00:15:30.510 "uuid": "cc68539f-c335-5a6a-a0ce-a96b5d564b7f", 00:15:30.510 "is_configured": true, 00:15:30.510 "data_offset": 0, 00:15:30.510 "data_size": 65536 00:15:30.510 } 00:15:30.510 ] 00:15:30.510 }' 00:15:30.510 04:13:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:30.510 04:13:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:30.510 04:13:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:30.510 04:13:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:30.510 04:13:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:31.478 04:13:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:31.478 04:13:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:31.478 04:13:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:31.478 04:13:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:31.478 04:13:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:31.478 04:13:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:31.478 04:13:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.478 04:13:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:31.478 04:13:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.478 04:13:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.478 04:13:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.478 04:13:31 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:31.478 "name": "raid_bdev1", 00:15:31.478 "uuid": "db56b99c-eaed-4060-b0cd-1d56f54f7346", 00:15:31.478 "strip_size_kb": 64, 00:15:31.478 "state": "online", 00:15:31.478 "raid_level": "raid5f", 00:15:31.478 "superblock": false, 00:15:31.478 "num_base_bdevs": 4, 00:15:31.478 "num_base_bdevs_discovered": 4, 00:15:31.478 "num_base_bdevs_operational": 4, 00:15:31.478 "process": { 00:15:31.478 "type": "rebuild", 00:15:31.478 "target": "spare", 00:15:31.478 "progress": { 00:15:31.478 "blocks": 86400, 00:15:31.478 "percent": 43 00:15:31.478 } 00:15:31.478 }, 00:15:31.478 "base_bdevs_list": [ 00:15:31.478 { 00:15:31.478 "name": "spare", 00:15:31.478 "uuid": "78067969-43ba-5e33-8d6b-26379e3c8327", 00:15:31.478 "is_configured": true, 00:15:31.478 "data_offset": 0, 00:15:31.478 "data_size": 65536 00:15:31.478 }, 00:15:31.478 { 00:15:31.478 "name": "BaseBdev2", 00:15:31.478 "uuid": "3ff7dc5d-f261-57d3-84e6-b46a75bc5c49", 00:15:31.478 "is_configured": true, 00:15:31.478 "data_offset": 0, 00:15:31.478 "data_size": 65536 00:15:31.478 }, 00:15:31.478 { 00:15:31.478 "name": "BaseBdev3", 00:15:31.478 "uuid": "f298928f-5bfd-534a-a28d-f3165e163875", 00:15:31.478 "is_configured": true, 00:15:31.478 "data_offset": 0, 00:15:31.478 "data_size": 65536 00:15:31.478 }, 00:15:31.478 { 00:15:31.478 "name": "BaseBdev4", 00:15:31.478 "uuid": "cc68539f-c335-5a6a-a0ce-a96b5d564b7f", 00:15:31.478 "is_configured": true, 00:15:31.478 "data_offset": 0, 00:15:31.478 "data_size": 65536 00:15:31.478 } 00:15:31.478 ] 00:15:31.478 }' 00:15:31.478 04:13:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:31.478 04:13:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:31.478 04:13:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:31.478 04:13:31 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:31.478 04:13:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:32.435 04:13:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:32.435 04:13:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:32.435 04:13:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:32.435 04:13:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:32.435 04:13:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:32.435 04:13:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:32.435 04:13:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.435 04:13:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:32.435 04:13:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.435 04:13:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.435 04:13:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.435 04:13:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:32.435 "name": "raid_bdev1", 00:15:32.435 "uuid": "db56b99c-eaed-4060-b0cd-1d56f54f7346", 00:15:32.435 "strip_size_kb": 64, 00:15:32.435 "state": "online", 00:15:32.435 "raid_level": "raid5f", 00:15:32.435 "superblock": false, 00:15:32.435 "num_base_bdevs": 4, 00:15:32.435 "num_base_bdevs_discovered": 4, 00:15:32.435 "num_base_bdevs_operational": 4, 00:15:32.435 "process": { 00:15:32.435 "type": "rebuild", 00:15:32.435 "target": "spare", 00:15:32.435 "progress": { 00:15:32.435 "blocks": 109440, 00:15:32.435 "percent": 55 00:15:32.435 } 
00:15:32.435 }, 00:15:32.435 "base_bdevs_list": [ 00:15:32.435 { 00:15:32.435 "name": "spare", 00:15:32.435 "uuid": "78067969-43ba-5e33-8d6b-26379e3c8327", 00:15:32.435 "is_configured": true, 00:15:32.435 "data_offset": 0, 00:15:32.435 "data_size": 65536 00:15:32.435 }, 00:15:32.435 { 00:15:32.435 "name": "BaseBdev2", 00:15:32.435 "uuid": "3ff7dc5d-f261-57d3-84e6-b46a75bc5c49", 00:15:32.435 "is_configured": true, 00:15:32.435 "data_offset": 0, 00:15:32.435 "data_size": 65536 00:15:32.435 }, 00:15:32.435 { 00:15:32.435 "name": "BaseBdev3", 00:15:32.435 "uuid": "f298928f-5bfd-534a-a28d-f3165e163875", 00:15:32.435 "is_configured": true, 00:15:32.435 "data_offset": 0, 00:15:32.435 "data_size": 65536 00:15:32.435 }, 00:15:32.435 { 00:15:32.435 "name": "BaseBdev4", 00:15:32.435 "uuid": "cc68539f-c335-5a6a-a0ce-a96b5d564b7f", 00:15:32.435 "is_configured": true, 00:15:32.435 "data_offset": 0, 00:15:32.435 "data_size": 65536 00:15:32.435 } 00:15:32.435 ] 00:15:32.435 }' 00:15:32.435 04:13:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:32.696 04:13:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:32.696 04:13:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:32.696 04:13:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:32.696 04:13:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:33.637 04:13:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:33.637 04:13:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:33.637 04:13:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:33.637 04:13:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:33.637 
04:13:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:33.637 04:13:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:33.637 04:13:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.637 04:13:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:33.637 04:13:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.637 04:13:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.637 04:13:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.637 04:13:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:33.637 "name": "raid_bdev1", 00:15:33.637 "uuid": "db56b99c-eaed-4060-b0cd-1d56f54f7346", 00:15:33.637 "strip_size_kb": 64, 00:15:33.637 "state": "online", 00:15:33.637 "raid_level": "raid5f", 00:15:33.637 "superblock": false, 00:15:33.637 "num_base_bdevs": 4, 00:15:33.637 "num_base_bdevs_discovered": 4, 00:15:33.637 "num_base_bdevs_operational": 4, 00:15:33.637 "process": { 00:15:33.637 "type": "rebuild", 00:15:33.637 "target": "spare", 00:15:33.637 "progress": { 00:15:33.637 "blocks": 130560, 00:15:33.637 "percent": 66 00:15:33.637 } 00:15:33.637 }, 00:15:33.637 "base_bdevs_list": [ 00:15:33.637 { 00:15:33.637 "name": "spare", 00:15:33.637 "uuid": "78067969-43ba-5e33-8d6b-26379e3c8327", 00:15:33.637 "is_configured": true, 00:15:33.637 "data_offset": 0, 00:15:33.637 "data_size": 65536 00:15:33.637 }, 00:15:33.637 { 00:15:33.637 "name": "BaseBdev2", 00:15:33.637 "uuid": "3ff7dc5d-f261-57d3-84e6-b46a75bc5c49", 00:15:33.637 "is_configured": true, 00:15:33.637 "data_offset": 0, 00:15:33.637 "data_size": 65536 00:15:33.637 }, 00:15:33.637 { 00:15:33.638 "name": "BaseBdev3", 00:15:33.638 "uuid": "f298928f-5bfd-534a-a28d-f3165e163875", 
00:15:33.638 "is_configured": true, 00:15:33.638 "data_offset": 0, 00:15:33.638 "data_size": 65536 00:15:33.638 }, 00:15:33.638 { 00:15:33.638 "name": "BaseBdev4", 00:15:33.638 "uuid": "cc68539f-c335-5a6a-a0ce-a96b5d564b7f", 00:15:33.638 "is_configured": true, 00:15:33.638 "data_offset": 0, 00:15:33.638 "data_size": 65536 00:15:33.638 } 00:15:33.638 ] 00:15:33.638 }' 00:15:33.638 04:13:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:33.638 04:13:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:33.638 04:13:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:33.898 04:13:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:33.898 04:13:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:34.838 04:13:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:34.838 04:13:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:34.838 04:13:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:34.838 04:13:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:34.838 04:13:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:34.838 04:13:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:34.838 04:13:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.838 04:13:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.838 04:13:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.838 04:13:34 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:34.838 04:13:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.838 04:13:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:34.838 "name": "raid_bdev1", 00:15:34.838 "uuid": "db56b99c-eaed-4060-b0cd-1d56f54f7346", 00:15:34.838 "strip_size_kb": 64, 00:15:34.838 "state": "online", 00:15:34.838 "raid_level": "raid5f", 00:15:34.838 "superblock": false, 00:15:34.838 "num_base_bdevs": 4, 00:15:34.838 "num_base_bdevs_discovered": 4, 00:15:34.838 "num_base_bdevs_operational": 4, 00:15:34.838 "process": { 00:15:34.838 "type": "rebuild", 00:15:34.838 "target": "spare", 00:15:34.838 "progress": { 00:15:34.838 "blocks": 153600, 00:15:34.838 "percent": 78 00:15:34.838 } 00:15:34.838 }, 00:15:34.839 "base_bdevs_list": [ 00:15:34.839 { 00:15:34.839 "name": "spare", 00:15:34.839 "uuid": "78067969-43ba-5e33-8d6b-26379e3c8327", 00:15:34.839 "is_configured": true, 00:15:34.839 "data_offset": 0, 00:15:34.839 "data_size": 65536 00:15:34.839 }, 00:15:34.839 { 00:15:34.839 "name": "BaseBdev2", 00:15:34.839 "uuid": "3ff7dc5d-f261-57d3-84e6-b46a75bc5c49", 00:15:34.839 "is_configured": true, 00:15:34.839 "data_offset": 0, 00:15:34.839 "data_size": 65536 00:15:34.839 }, 00:15:34.839 { 00:15:34.839 "name": "BaseBdev3", 00:15:34.839 "uuid": "f298928f-5bfd-534a-a28d-f3165e163875", 00:15:34.839 "is_configured": true, 00:15:34.839 "data_offset": 0, 00:15:34.839 "data_size": 65536 00:15:34.839 }, 00:15:34.839 { 00:15:34.839 "name": "BaseBdev4", 00:15:34.839 "uuid": "cc68539f-c335-5a6a-a0ce-a96b5d564b7f", 00:15:34.839 "is_configured": true, 00:15:34.839 "data_offset": 0, 00:15:34.839 "data_size": 65536 00:15:34.839 } 00:15:34.839 ] 00:15:34.839 }' 00:15:34.839 04:13:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:34.839 04:13:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:15:34.839 04:13:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:34.839 04:13:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:34.839 04:13:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:36.221 04:13:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:36.221 04:13:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:36.221 04:13:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:36.221 04:13:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:36.221 04:13:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:36.221 04:13:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:36.221 04:13:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.221 04:13:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.221 04:13:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.221 04:13:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.221 04:13:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.221 04:13:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:36.221 "name": "raid_bdev1", 00:15:36.221 "uuid": "db56b99c-eaed-4060-b0cd-1d56f54f7346", 00:15:36.221 "strip_size_kb": 64, 00:15:36.221 "state": "online", 00:15:36.221 "raid_level": "raid5f", 00:15:36.221 "superblock": false, 00:15:36.221 "num_base_bdevs": 4, 00:15:36.222 "num_base_bdevs_discovered": 4, 00:15:36.222 "num_base_bdevs_operational": 4, 00:15:36.222 
"process": { 00:15:36.222 "type": "rebuild", 00:15:36.222 "target": "spare", 00:15:36.222 "progress": { 00:15:36.222 "blocks": 174720, 00:15:36.222 "percent": 88 00:15:36.222 } 00:15:36.222 }, 00:15:36.222 "base_bdevs_list": [ 00:15:36.222 { 00:15:36.222 "name": "spare", 00:15:36.222 "uuid": "78067969-43ba-5e33-8d6b-26379e3c8327", 00:15:36.222 "is_configured": true, 00:15:36.222 "data_offset": 0, 00:15:36.222 "data_size": 65536 00:15:36.222 }, 00:15:36.222 { 00:15:36.222 "name": "BaseBdev2", 00:15:36.222 "uuid": "3ff7dc5d-f261-57d3-84e6-b46a75bc5c49", 00:15:36.222 "is_configured": true, 00:15:36.222 "data_offset": 0, 00:15:36.222 "data_size": 65536 00:15:36.222 }, 00:15:36.222 { 00:15:36.222 "name": "BaseBdev3", 00:15:36.222 "uuid": "f298928f-5bfd-534a-a28d-f3165e163875", 00:15:36.222 "is_configured": true, 00:15:36.222 "data_offset": 0, 00:15:36.222 "data_size": 65536 00:15:36.222 }, 00:15:36.222 { 00:15:36.222 "name": "BaseBdev4", 00:15:36.222 "uuid": "cc68539f-c335-5a6a-a0ce-a96b5d564b7f", 00:15:36.222 "is_configured": true, 00:15:36.222 "data_offset": 0, 00:15:36.222 "data_size": 65536 00:15:36.222 } 00:15:36.222 ] 00:15:36.222 }' 00:15:36.222 04:13:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:36.222 04:13:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:36.222 04:13:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:36.222 04:13:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:36.222 04:13:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:37.159 04:13:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:37.159 04:13:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:37.159 04:13:36 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:37.159 04:13:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:37.159 04:13:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:37.159 04:13:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:37.159 [2024-11-21 04:13:36.956637] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:37.159 [2024-11-21 04:13:36.956729] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:37.159 [2024-11-21 04:13:36.956771] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:37.159 04:13:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.159 04:13:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.159 04:13:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.159 04:13:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.159 04:13:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.159 04:13:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:37.159 "name": "raid_bdev1", 00:15:37.159 "uuid": "db56b99c-eaed-4060-b0cd-1d56f54f7346", 00:15:37.159 "strip_size_kb": 64, 00:15:37.159 "state": "online", 00:15:37.159 "raid_level": "raid5f", 00:15:37.159 "superblock": false, 00:15:37.159 "num_base_bdevs": 4, 00:15:37.159 "num_base_bdevs_discovered": 4, 00:15:37.159 "num_base_bdevs_operational": 4, 00:15:37.159 "base_bdevs_list": [ 00:15:37.159 { 00:15:37.159 "name": "spare", 00:15:37.159 "uuid": "78067969-43ba-5e33-8d6b-26379e3c8327", 00:15:37.159 "is_configured": true, 00:15:37.159 "data_offset": 0, 00:15:37.159 "data_size": 65536 
00:15:37.159 }, 00:15:37.159 { 00:15:37.159 "name": "BaseBdev2", 00:15:37.159 "uuid": "3ff7dc5d-f261-57d3-84e6-b46a75bc5c49", 00:15:37.159 "is_configured": true, 00:15:37.159 "data_offset": 0, 00:15:37.159 "data_size": 65536 00:15:37.159 }, 00:15:37.159 { 00:15:37.159 "name": "BaseBdev3", 00:15:37.159 "uuid": "f298928f-5bfd-534a-a28d-f3165e163875", 00:15:37.159 "is_configured": true, 00:15:37.159 "data_offset": 0, 00:15:37.159 "data_size": 65536 00:15:37.159 }, 00:15:37.159 { 00:15:37.159 "name": "BaseBdev4", 00:15:37.159 "uuid": "cc68539f-c335-5a6a-a0ce-a96b5d564b7f", 00:15:37.159 "is_configured": true, 00:15:37.159 "data_offset": 0, 00:15:37.159 "data_size": 65536 00:15:37.159 } 00:15:37.159 ] 00:15:37.159 }' 00:15:37.160 04:13:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:37.160 04:13:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:37.160 04:13:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:37.160 04:13:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:37.160 04:13:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:15:37.160 04:13:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:37.160 04:13:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:37.160 04:13:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:37.160 04:13:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:37.160 04:13:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:37.160 04:13:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.160 04:13:37 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.160 04:13:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.160 04:13:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.420 04:13:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.420 04:13:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:37.420 "name": "raid_bdev1", 00:15:37.420 "uuid": "db56b99c-eaed-4060-b0cd-1d56f54f7346", 00:15:37.420 "strip_size_kb": 64, 00:15:37.420 "state": "online", 00:15:37.420 "raid_level": "raid5f", 00:15:37.420 "superblock": false, 00:15:37.420 "num_base_bdevs": 4, 00:15:37.420 "num_base_bdevs_discovered": 4, 00:15:37.420 "num_base_bdevs_operational": 4, 00:15:37.420 "base_bdevs_list": [ 00:15:37.420 { 00:15:37.420 "name": "spare", 00:15:37.420 "uuid": "78067969-43ba-5e33-8d6b-26379e3c8327", 00:15:37.420 "is_configured": true, 00:15:37.420 "data_offset": 0, 00:15:37.420 "data_size": 65536 00:15:37.420 }, 00:15:37.420 { 00:15:37.420 "name": "BaseBdev2", 00:15:37.420 "uuid": "3ff7dc5d-f261-57d3-84e6-b46a75bc5c49", 00:15:37.420 "is_configured": true, 00:15:37.420 "data_offset": 0, 00:15:37.420 "data_size": 65536 00:15:37.420 }, 00:15:37.420 { 00:15:37.420 "name": "BaseBdev3", 00:15:37.420 "uuid": "f298928f-5bfd-534a-a28d-f3165e163875", 00:15:37.420 "is_configured": true, 00:15:37.420 "data_offset": 0, 00:15:37.420 "data_size": 65536 00:15:37.420 }, 00:15:37.420 { 00:15:37.420 "name": "BaseBdev4", 00:15:37.420 "uuid": "cc68539f-c335-5a6a-a0ce-a96b5d564b7f", 00:15:37.420 "is_configured": true, 00:15:37.420 "data_offset": 0, 00:15:37.420 "data_size": 65536 00:15:37.420 } 00:15:37.420 ] 00:15:37.420 }' 00:15:37.420 04:13:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:37.420 04:13:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == 
\n\o\n\e ]] 00:15:37.420 04:13:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:37.420 04:13:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:37.420 04:13:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:37.420 04:13:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:37.420 04:13:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:37.420 04:13:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:37.420 04:13:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:37.420 04:13:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:37.420 04:13:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.420 04:13:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:37.420 04:13:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:37.420 04:13:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.420 04:13:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.420 04:13:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.420 04:13:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.420 04:13:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.420 04:13:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.420 04:13:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:37.420 "name": 
"raid_bdev1", 00:15:37.420 "uuid": "db56b99c-eaed-4060-b0cd-1d56f54f7346", 00:15:37.420 "strip_size_kb": 64, 00:15:37.420 "state": "online", 00:15:37.420 "raid_level": "raid5f", 00:15:37.420 "superblock": false, 00:15:37.420 "num_base_bdevs": 4, 00:15:37.420 "num_base_bdevs_discovered": 4, 00:15:37.420 "num_base_bdevs_operational": 4, 00:15:37.420 "base_bdevs_list": [ 00:15:37.420 { 00:15:37.420 "name": "spare", 00:15:37.420 "uuid": "78067969-43ba-5e33-8d6b-26379e3c8327", 00:15:37.420 "is_configured": true, 00:15:37.420 "data_offset": 0, 00:15:37.420 "data_size": 65536 00:15:37.420 }, 00:15:37.420 { 00:15:37.420 "name": "BaseBdev2", 00:15:37.421 "uuid": "3ff7dc5d-f261-57d3-84e6-b46a75bc5c49", 00:15:37.421 "is_configured": true, 00:15:37.421 "data_offset": 0, 00:15:37.421 "data_size": 65536 00:15:37.421 }, 00:15:37.421 { 00:15:37.421 "name": "BaseBdev3", 00:15:37.421 "uuid": "f298928f-5bfd-534a-a28d-f3165e163875", 00:15:37.421 "is_configured": true, 00:15:37.421 "data_offset": 0, 00:15:37.421 "data_size": 65536 00:15:37.421 }, 00:15:37.421 { 00:15:37.421 "name": "BaseBdev4", 00:15:37.421 "uuid": "cc68539f-c335-5a6a-a0ce-a96b5d564b7f", 00:15:37.421 "is_configured": true, 00:15:37.421 "data_offset": 0, 00:15:37.421 "data_size": 65536 00:15:37.421 } 00:15:37.421 ] 00:15:37.421 }' 00:15:37.421 04:13:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.421 04:13:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.680 04:13:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:37.680 04:13:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.680 04:13:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.680 [2024-11-21 04:13:37.625328] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:37.680 [2024-11-21 04:13:37.625366] 
bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:37.680 [2024-11-21 04:13:37.625457] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:37.680 [2024-11-21 04:13:37.625604] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:37.680 [2024-11-21 04:13:37.625641] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:15:37.680 04:13:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.680 04:13:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.680 04:13:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:15:37.680 04:13:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.680 04:13:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.680 04:13:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.939 04:13:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:37.939 04:13:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:37.939 04:13:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:37.940 04:13:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:37.940 04:13:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:37.940 04:13:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:37.940 04:13:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:37.940 04:13:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:37.940 04:13:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:37.940 04:13:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:37.940 04:13:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:37.940 04:13:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:37.940 04:13:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:37.940 /dev/nbd0 00:15:37.940 04:13:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:37.940 04:13:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:37.940 04:13:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:37.940 04:13:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:37.940 04:13:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:37.940 04:13:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:37.940 04:13:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:37.940 04:13:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:37.940 04:13:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:37.940 04:13:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:37.940 04:13:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:37.940 1+0 records in 00:15:37.940 1+0 records out 00:15:37.940 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000459631 s, 8.9 MB/s 00:15:37.940 04:13:37 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:38.198 04:13:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:38.198 04:13:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:38.198 04:13:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:38.198 04:13:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:38.198 04:13:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:38.198 04:13:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:38.198 04:13:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:38.198 /dev/nbd1 00:15:38.198 04:13:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:38.198 04:13:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:38.198 04:13:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:38.198 04:13:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:38.198 04:13:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:38.198 04:13:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:38.198 04:13:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:38.198 04:13:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:38.198 04:13:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:38.198 04:13:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 
20 )) 00:15:38.198 04:13:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:38.198 1+0 records in 00:15:38.198 1+0 records out 00:15:38.198 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000461776 s, 8.9 MB/s 00:15:38.198 04:13:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:38.198 04:13:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:38.198 04:13:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:38.458 04:13:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:38.458 04:13:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:38.458 04:13:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:38.458 04:13:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:38.458 04:13:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:38.458 04:13:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:38.458 04:13:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:38.458 04:13:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:38.458 04:13:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:38.458 04:13:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:38.458 04:13:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:38.458 04:13:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:38.718 04:13:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:38.718 04:13:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:38.718 04:13:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:38.718 04:13:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:38.718 04:13:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:38.718 04:13:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:38.718 04:13:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:38.718 04:13:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:38.719 04:13:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:38.719 04:13:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:38.719 04:13:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:38.719 04:13:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:38.719 04:13:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:38.719 04:13:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:38.719 04:13:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:38.719 04:13:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:38.719 04:13:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:38.719 04:13:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:38.719 04:13:38 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:38.719 04:13:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 95075 00:15:38.719 04:13:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 95075 ']' 00:15:38.719 04:13:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 95075 00:15:38.719 04:13:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:15:38.719 04:13:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:38.719 04:13:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95075 00:15:38.979 04:13:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:38.979 04:13:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:38.979 04:13:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95075' 00:15:38.979 killing process with pid 95075 00:15:38.979 04:13:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 95075 00:15:38.979 Received shutdown signal, test time was about 60.000000 seconds 00:15:38.979 00:15:38.979 Latency(us) 00:15:38.979 [2024-11-21T04:13:38.952Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:38.979 [2024-11-21T04:13:38.952Z] =================================================================================================================== 00:15:38.979 [2024-11-21T04:13:38.952Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:38.979 [2024-11-21 04:13:38.721185] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:38.979 04:13:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 95075 00:15:38.979 [2024-11-21 04:13:38.811800] bdev_raid.c:1413:raid_bdev_exit: 
*DEBUG*: raid_bdev_exit 00:15:39.240 04:13:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:15:39.240 00:15:39.240 real 0m17.431s 00:15:39.240 user 0m21.000s 00:15:39.240 sys 0m2.402s 00:15:39.240 04:13:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:39.240 04:13:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.240 ************************************ 00:15:39.240 END TEST raid5f_rebuild_test 00:15:39.240 ************************************ 00:15:39.240 04:13:39 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:15:39.240 04:13:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:39.240 04:13:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:39.240 04:13:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:39.240 ************************************ 00:15:39.240 START TEST raid5f_rebuild_test_sb 00:15:39.240 ************************************ 00:15:39.240 04:13:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true 00:15:39.240 04:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:39.240 04:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:39.240 04:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:39.240 04:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:39.240 04:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:39.240 04:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:39.240 04:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:39.240 04:13:39 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:39.240 04:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:39.240 04:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:39.240 04:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:39.240 04:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:39.240 04:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:39.240 04:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:39.240 04:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:39.240 04:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:39.501 04:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:15:39.501 04:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:39.501 04:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:39.501 04:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:39.501 04:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:39.501 04:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:39.501 04:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:39.501 04:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:39.501 04:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:39.501 04:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:39.501 
04:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:39.501 04:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:39.501 04:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:39.501 04:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:39.501 04:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:39.501 04:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:39.501 04:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=95562 00:15:39.501 04:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 95562 00:15:39.501 04:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:39.501 04:13:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 95562 ']' 00:15:39.501 04:13:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:39.501 04:13:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:39.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:39.501 04:13:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:39.501 04:13:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:39.501 04:13:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.501 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:15:39.501 Zero copy mechanism will not be used. 00:15:39.501 [2024-11-21 04:13:39.308990] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:15:39.501 [2024-11-21 04:13:39.309123] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95562 ] 00:15:39.501 [2024-11-21 04:13:39.442857] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:39.761 [2024-11-21 04:13:39.481214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:39.761 [2024-11-21 04:13:39.558936] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:39.761 [2024-11-21 04:13:39.558973] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:40.330 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:40.330 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:40.330 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:40.330 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:40.330 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.330 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.330 BaseBdev1_malloc 00:15:40.330 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.330 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:40.330 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.330 
04:13:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.330 [2024-11-21 04:13:40.162415] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:40.330 [2024-11-21 04:13:40.162479] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:40.330 [2024-11-21 04:13:40.162512] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:15:40.330 [2024-11-21 04:13:40.162525] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:40.330 [2024-11-21 04:13:40.164889] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:40.330 [2024-11-21 04:13:40.164924] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:40.330 BaseBdev1 00:15:40.330 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.331 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:40.331 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:40.331 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.331 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.331 BaseBdev2_malloc 00:15:40.331 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.331 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:40.331 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.331 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.331 [2024-11-21 04:13:40.197073] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on BaseBdev2_malloc 00:15:40.331 [2024-11-21 04:13:40.197119] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:40.331 [2024-11-21 04:13:40.197142] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:40.331 [2024-11-21 04:13:40.197150] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:40.331 [2024-11-21 04:13:40.199459] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:40.331 [2024-11-21 04:13:40.199495] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:40.331 BaseBdev2 00:15:40.331 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.331 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:40.331 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:40.331 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.331 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.331 BaseBdev3_malloc 00:15:40.331 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.331 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:40.331 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.331 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.331 [2024-11-21 04:13:40.231710] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:40.331 [2024-11-21 04:13:40.231760] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:40.331 [2024-11-21 
04:13:40.231786] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:40.331 [2024-11-21 04:13:40.231794] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:40.331 [2024-11-21 04:13:40.234100] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:40.331 [2024-11-21 04:13:40.234130] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:40.331 BaseBdev3 00:15:40.331 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.331 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:40.331 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:40.331 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.331 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.331 BaseBdev4_malloc 00:15:40.331 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.331 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:40.331 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.331 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.331 [2024-11-21 04:13:40.284596] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:40.331 [2024-11-21 04:13:40.284665] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:40.331 [2024-11-21 04:13:40.284701] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:40.331 [2024-11-21 04:13:40.284716] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:40.331 [2024-11-21 04:13:40.288437] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:40.331 [2024-11-21 04:13:40.288485] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:40.331 BaseBdev4 00:15:40.331 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.331 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:40.331 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.331 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.591 spare_malloc 00:15:40.591 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.591 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:40.591 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.591 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.591 spare_delay 00:15:40.591 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.592 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:40.592 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.592 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.592 [2024-11-21 04:13:40.332480] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:40.592 [2024-11-21 04:13:40.332521] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:15:40.592 [2024-11-21 04:13:40.332540] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:40.592 [2024-11-21 04:13:40.332549] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:40.592 [2024-11-21 04:13:40.334905] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:40.592 [2024-11-21 04:13:40.334936] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:40.592 spare 00:15:40.592 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.592 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:40.592 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.592 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.592 [2024-11-21 04:13:40.344554] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:40.592 [2024-11-21 04:13:40.346655] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:40.592 [2024-11-21 04:13:40.346717] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:40.592 [2024-11-21 04:13:40.346761] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:40.592 [2024-11-21 04:13:40.346939] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:15:40.592 [2024-11-21 04:13:40.346951] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:40.592 [2024-11-21 04:13:40.347228] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:15:40.592 [2024-11-21 04:13:40.347742] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000001200 00:15:40.592 [2024-11-21 04:13:40.347764] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:15:40.592 [2024-11-21 04:13:40.347879] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:40.592 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.592 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:40.592 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:40.592 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:40.592 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:40.592 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:40.592 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:40.592 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:40.592 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:40.592 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:40.592 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:40.592 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.592 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.592 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:40.592 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.592 
04:13:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.592 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:40.592 "name": "raid_bdev1", 00:15:40.592 "uuid": "eaea3d47-3db4-43d8-96c3-0a1162c2a8ee", 00:15:40.592 "strip_size_kb": 64, 00:15:40.592 "state": "online", 00:15:40.592 "raid_level": "raid5f", 00:15:40.592 "superblock": true, 00:15:40.592 "num_base_bdevs": 4, 00:15:40.592 "num_base_bdevs_discovered": 4, 00:15:40.592 "num_base_bdevs_operational": 4, 00:15:40.592 "base_bdevs_list": [ 00:15:40.592 { 00:15:40.592 "name": "BaseBdev1", 00:15:40.592 "uuid": "c8c63558-5c94-50a2-8676-b05a1b053b0a", 00:15:40.592 "is_configured": true, 00:15:40.592 "data_offset": 2048, 00:15:40.592 "data_size": 63488 00:15:40.592 }, 00:15:40.592 { 00:15:40.592 "name": "BaseBdev2", 00:15:40.592 "uuid": "844356f8-42fb-5f24-ab05-8ca6d38e7b4c", 00:15:40.592 "is_configured": true, 00:15:40.592 "data_offset": 2048, 00:15:40.592 "data_size": 63488 00:15:40.592 }, 00:15:40.592 { 00:15:40.592 "name": "BaseBdev3", 00:15:40.592 "uuid": "e02b1dc5-889c-5e52-875a-97e09c581538", 00:15:40.592 "is_configured": true, 00:15:40.592 "data_offset": 2048, 00:15:40.592 "data_size": 63488 00:15:40.592 }, 00:15:40.592 { 00:15:40.592 "name": "BaseBdev4", 00:15:40.592 "uuid": "369e6f58-2617-54be-b520-fed72b8a28ec", 00:15:40.592 "is_configured": true, 00:15:40.592 "data_offset": 2048, 00:15:40.592 "data_size": 63488 00:15:40.592 } 00:15:40.592 ] 00:15:40.592 }' 00:15:40.592 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:40.592 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.852 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:41.113 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:41.113 04:13:40 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.113 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.113 [2024-11-21 04:13:40.830195] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:41.113 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.113 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:15:41.113 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.113 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:41.113 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.113 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.113 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.113 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:41.113 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:41.113 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:41.113 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:41.113 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:41.113 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:41.113 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:41.113 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:41.113 04:13:40 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:41.113 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:41.113 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:41.113 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:41.113 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:41.113 04:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:41.374 [2024-11-21 04:13:41.085646] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:15:41.374 /dev/nbd0 00:15:41.374 04:13:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:41.374 04:13:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:41.374 04:13:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:41.374 04:13:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:41.374 04:13:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:41.374 04:13:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:41.374 04:13:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:41.374 04:13:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:41.374 04:13:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:41.374 04:13:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:41.374 04:13:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:41.374 1+0 records in 00:15:41.374 1+0 records out 00:15:41.374 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000482148 s, 8.5 MB/s 00:15:41.374 04:13:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:41.374 04:13:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:41.374 04:13:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:41.374 04:13:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:41.374 04:13:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:41.374 04:13:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:41.374 04:13:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:41.374 04:13:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:41.374 04:13:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:15:41.374 04:13:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:15:41.374 04:13:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:15:41.634 496+0 records in 00:15:41.634 496+0 records out 00:15:41.634 97517568 bytes (98 MB, 93 MiB) copied, 0.406439 s, 240 MB/s 00:15:41.634 04:13:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:41.634 04:13:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:41.634 04:13:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:41.634 04:13:41 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:41.634 04:13:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:41.634 04:13:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:41.634 04:13:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:41.894 04:13:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:41.894 [2024-11-21 04:13:41.768424] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:41.894 04:13:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:41.894 04:13:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:41.894 04:13:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:41.894 04:13:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:41.894 04:13:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:41.894 04:13:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:41.894 04:13:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:41.894 04:13:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:41.894 04:13:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.894 04:13:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.894 [2024-11-21 04:13:41.784494] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:41.894 04:13:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.894 04:13:41 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:41.894 04:13:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:41.894 04:13:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:41.894 04:13:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:41.894 04:13:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:41.894 04:13:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:41.894 04:13:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.894 04:13:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.894 04:13:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.894 04:13:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.894 04:13:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.894 04:13:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.894 04:13:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.894 04:13:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.894 04:13:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.894 04:13:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.894 "name": "raid_bdev1", 00:15:41.894 "uuid": "eaea3d47-3db4-43d8-96c3-0a1162c2a8ee", 00:15:41.894 "strip_size_kb": 64, 00:15:41.894 "state": "online", 00:15:41.894 "raid_level": "raid5f", 00:15:41.894 "superblock": true, 00:15:41.894 "num_base_bdevs": 4, 
00:15:41.894 "num_base_bdevs_discovered": 3, 00:15:41.894 "num_base_bdevs_operational": 3, 00:15:41.894 "base_bdevs_list": [ 00:15:41.894 { 00:15:41.894 "name": null, 00:15:41.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.894 "is_configured": false, 00:15:41.894 "data_offset": 0, 00:15:41.894 "data_size": 63488 00:15:41.894 }, 00:15:41.894 { 00:15:41.894 "name": "BaseBdev2", 00:15:41.894 "uuid": "844356f8-42fb-5f24-ab05-8ca6d38e7b4c", 00:15:41.894 "is_configured": true, 00:15:41.894 "data_offset": 2048, 00:15:41.894 "data_size": 63488 00:15:41.894 }, 00:15:41.894 { 00:15:41.894 "name": "BaseBdev3", 00:15:41.894 "uuid": "e02b1dc5-889c-5e52-875a-97e09c581538", 00:15:41.894 "is_configured": true, 00:15:41.894 "data_offset": 2048, 00:15:41.894 "data_size": 63488 00:15:41.894 }, 00:15:41.894 { 00:15:41.894 "name": "BaseBdev4", 00:15:41.894 "uuid": "369e6f58-2617-54be-b520-fed72b8a28ec", 00:15:41.894 "is_configured": true, 00:15:41.894 "data_offset": 2048, 00:15:41.894 "data_size": 63488 00:15:41.894 } 00:15:41.894 ] 00:15:41.894 }' 00:15:41.894 04:13:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.894 04:13:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.464 04:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:42.464 04:13:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.464 04:13:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.464 [2024-11-21 04:13:42.235888] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:42.464 [2024-11-21 04:13:42.243308] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000270a0 00:15:42.464 04:13:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.464 04:13:42 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:42.464 [2024-11-21 04:13:42.245893] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:43.403 04:13:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:43.403 04:13:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:43.403 04:13:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:43.403 04:13:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:43.403 04:13:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:43.403 04:13:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.403 04:13:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.403 04:13:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.403 04:13:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.403 04:13:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.403 04:13:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:43.403 "name": "raid_bdev1", 00:15:43.403 "uuid": "eaea3d47-3db4-43d8-96c3-0a1162c2a8ee", 00:15:43.403 "strip_size_kb": 64, 00:15:43.403 "state": "online", 00:15:43.403 "raid_level": "raid5f", 00:15:43.403 "superblock": true, 00:15:43.403 "num_base_bdevs": 4, 00:15:43.403 "num_base_bdevs_discovered": 4, 00:15:43.403 "num_base_bdevs_operational": 4, 00:15:43.403 "process": { 00:15:43.403 "type": "rebuild", 00:15:43.403 "target": "spare", 00:15:43.403 "progress": { 00:15:43.403 "blocks": 19200, 00:15:43.403 "percent": 10 00:15:43.403 } 
00:15:43.403 }, 00:15:43.403 "base_bdevs_list": [ 00:15:43.403 { 00:15:43.403 "name": "spare", 00:15:43.403 "uuid": "539e8c41-e369-559a-a008-1f8f8850f2e3", 00:15:43.403 "is_configured": true, 00:15:43.403 "data_offset": 2048, 00:15:43.403 "data_size": 63488 00:15:43.403 }, 00:15:43.403 { 00:15:43.403 "name": "BaseBdev2", 00:15:43.403 "uuid": "844356f8-42fb-5f24-ab05-8ca6d38e7b4c", 00:15:43.403 "is_configured": true, 00:15:43.403 "data_offset": 2048, 00:15:43.403 "data_size": 63488 00:15:43.403 }, 00:15:43.403 { 00:15:43.403 "name": "BaseBdev3", 00:15:43.403 "uuid": "e02b1dc5-889c-5e52-875a-97e09c581538", 00:15:43.403 "is_configured": true, 00:15:43.403 "data_offset": 2048, 00:15:43.403 "data_size": 63488 00:15:43.403 }, 00:15:43.403 { 00:15:43.403 "name": "BaseBdev4", 00:15:43.403 "uuid": "369e6f58-2617-54be-b520-fed72b8a28ec", 00:15:43.403 "is_configured": true, 00:15:43.403 "data_offset": 2048, 00:15:43.403 "data_size": 63488 00:15:43.403 } 00:15:43.403 ] 00:15:43.403 }' 00:15:43.403 04:13:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:43.403 04:13:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:43.403 04:13:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:43.664 04:13:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:43.664 04:13:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:43.664 04:13:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.664 04:13:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.664 [2024-11-21 04:13:43.413085] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:43.664 [2024-11-21 04:13:43.452357] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished 
rebuild on raid bdev raid_bdev1: No such device 00:15:43.664 [2024-11-21 04:13:43.452424] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:43.664 [2024-11-21 04:13:43.452444] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:43.664 [2024-11-21 04:13:43.452451] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:43.664 04:13:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.664 04:13:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:43.664 04:13:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:43.664 04:13:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:43.664 04:13:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:43.664 04:13:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:43.664 04:13:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:43.664 04:13:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:43.664 04:13:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:43.664 04:13:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:43.664 04:13:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:43.664 04:13:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.664 04:13:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.664 04:13:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.664 04:13:43 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.664 04:13:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.664 04:13:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:43.664 "name": "raid_bdev1", 00:15:43.664 "uuid": "eaea3d47-3db4-43d8-96c3-0a1162c2a8ee", 00:15:43.664 "strip_size_kb": 64, 00:15:43.664 "state": "online", 00:15:43.664 "raid_level": "raid5f", 00:15:43.664 "superblock": true, 00:15:43.665 "num_base_bdevs": 4, 00:15:43.665 "num_base_bdevs_discovered": 3, 00:15:43.665 "num_base_bdevs_operational": 3, 00:15:43.665 "base_bdevs_list": [ 00:15:43.665 { 00:15:43.665 "name": null, 00:15:43.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.665 "is_configured": false, 00:15:43.665 "data_offset": 0, 00:15:43.665 "data_size": 63488 00:15:43.665 }, 00:15:43.665 { 00:15:43.665 "name": "BaseBdev2", 00:15:43.665 "uuid": "844356f8-42fb-5f24-ab05-8ca6d38e7b4c", 00:15:43.665 "is_configured": true, 00:15:43.665 "data_offset": 2048, 00:15:43.665 "data_size": 63488 00:15:43.665 }, 00:15:43.665 { 00:15:43.665 "name": "BaseBdev3", 00:15:43.665 "uuid": "e02b1dc5-889c-5e52-875a-97e09c581538", 00:15:43.665 "is_configured": true, 00:15:43.665 "data_offset": 2048, 00:15:43.665 "data_size": 63488 00:15:43.665 }, 00:15:43.665 { 00:15:43.665 "name": "BaseBdev4", 00:15:43.665 "uuid": "369e6f58-2617-54be-b520-fed72b8a28ec", 00:15:43.665 "is_configured": true, 00:15:43.665 "data_offset": 2048, 00:15:43.665 "data_size": 63488 00:15:43.665 } 00:15:43.665 ] 00:15:43.665 }' 00:15:43.665 04:13:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:43.665 04:13:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.236 04:13:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:44.236 04:13:43 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:44.236 04:13:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:44.236 04:13:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:44.236 04:13:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:44.236 04:13:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.236 04:13:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.236 04:13:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.236 04:13:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.236 04:13:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.236 04:13:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:44.236 "name": "raid_bdev1", 00:15:44.236 "uuid": "eaea3d47-3db4-43d8-96c3-0a1162c2a8ee", 00:15:44.236 "strip_size_kb": 64, 00:15:44.236 "state": "online", 00:15:44.236 "raid_level": "raid5f", 00:15:44.236 "superblock": true, 00:15:44.236 "num_base_bdevs": 4, 00:15:44.236 "num_base_bdevs_discovered": 3, 00:15:44.236 "num_base_bdevs_operational": 3, 00:15:44.236 "base_bdevs_list": [ 00:15:44.236 { 00:15:44.236 "name": null, 00:15:44.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.236 "is_configured": false, 00:15:44.236 "data_offset": 0, 00:15:44.236 "data_size": 63488 00:15:44.236 }, 00:15:44.236 { 00:15:44.236 "name": "BaseBdev2", 00:15:44.236 "uuid": "844356f8-42fb-5f24-ab05-8ca6d38e7b4c", 00:15:44.236 "is_configured": true, 00:15:44.236 "data_offset": 2048, 00:15:44.236 "data_size": 63488 00:15:44.236 }, 00:15:44.236 { 00:15:44.236 "name": "BaseBdev3", 00:15:44.236 "uuid": 
"e02b1dc5-889c-5e52-875a-97e09c581538", 00:15:44.236 "is_configured": true, 00:15:44.236 "data_offset": 2048, 00:15:44.236 "data_size": 63488 00:15:44.236 }, 00:15:44.236 { 00:15:44.236 "name": "BaseBdev4", 00:15:44.236 "uuid": "369e6f58-2617-54be-b520-fed72b8a28ec", 00:15:44.236 "is_configured": true, 00:15:44.236 "data_offset": 2048, 00:15:44.236 "data_size": 63488 00:15:44.236 } 00:15:44.236 ] 00:15:44.236 }' 00:15:44.236 04:13:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:44.236 04:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:44.236 04:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:44.236 04:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:44.236 04:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:44.236 04:13:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.236 04:13:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.236 [2024-11-21 04:13:44.053267] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:44.236 [2024-11-21 04:13:44.060049] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027170 00:15:44.236 04:13:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.236 04:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:44.236 [2024-11-21 04:13:44.062586] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:45.177 04:13:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:45.177 04:13:45 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:45.177 04:13:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:45.177 04:13:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:45.177 04:13:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:45.177 04:13:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.177 04:13:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.177 04:13:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.177 04:13:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.177 04:13:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.177 04:13:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:45.177 "name": "raid_bdev1", 00:15:45.177 "uuid": "eaea3d47-3db4-43d8-96c3-0a1162c2a8ee", 00:15:45.177 "strip_size_kb": 64, 00:15:45.177 "state": "online", 00:15:45.177 "raid_level": "raid5f", 00:15:45.177 "superblock": true, 00:15:45.177 "num_base_bdevs": 4, 00:15:45.177 "num_base_bdevs_discovered": 4, 00:15:45.177 "num_base_bdevs_operational": 4, 00:15:45.177 "process": { 00:15:45.177 "type": "rebuild", 00:15:45.177 "target": "spare", 00:15:45.177 "progress": { 00:15:45.177 "blocks": 19200, 00:15:45.177 "percent": 10 00:15:45.177 } 00:15:45.177 }, 00:15:45.177 "base_bdevs_list": [ 00:15:45.177 { 00:15:45.177 "name": "spare", 00:15:45.177 "uuid": "539e8c41-e369-559a-a008-1f8f8850f2e3", 00:15:45.177 "is_configured": true, 00:15:45.177 "data_offset": 2048, 00:15:45.177 "data_size": 63488 00:15:45.177 }, 00:15:45.177 { 00:15:45.177 "name": "BaseBdev2", 00:15:45.177 "uuid": "844356f8-42fb-5f24-ab05-8ca6d38e7b4c", 00:15:45.177 
"is_configured": true, 00:15:45.177 "data_offset": 2048, 00:15:45.177 "data_size": 63488 00:15:45.177 }, 00:15:45.177 { 00:15:45.177 "name": "BaseBdev3", 00:15:45.177 "uuid": "e02b1dc5-889c-5e52-875a-97e09c581538", 00:15:45.177 "is_configured": true, 00:15:45.177 "data_offset": 2048, 00:15:45.177 "data_size": 63488 00:15:45.177 }, 00:15:45.177 { 00:15:45.177 "name": "BaseBdev4", 00:15:45.177 "uuid": "369e6f58-2617-54be-b520-fed72b8a28ec", 00:15:45.177 "is_configured": true, 00:15:45.177 "data_offset": 2048, 00:15:45.177 "data_size": 63488 00:15:45.177 } 00:15:45.177 ] 00:15:45.177 }' 00:15:45.177 04:13:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:45.177 04:13:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:45.438 04:13:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:45.438 04:13:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:45.438 04:13:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:45.438 04:13:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:45.438 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:45.438 04:13:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:45.438 04:13:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:45.438 04:13:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=541 00:15:45.438 04:13:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:45.438 04:13:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:45.438 04:13:45 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:45.438 04:13:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:45.438 04:13:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:45.438 04:13:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:45.438 04:13:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.438 04:13:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.438 04:13:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.438 04:13:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.438 04:13:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.438 04:13:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:45.438 "name": "raid_bdev1", 00:15:45.438 "uuid": "eaea3d47-3db4-43d8-96c3-0a1162c2a8ee", 00:15:45.438 "strip_size_kb": 64, 00:15:45.438 "state": "online", 00:15:45.438 "raid_level": "raid5f", 00:15:45.438 "superblock": true, 00:15:45.438 "num_base_bdevs": 4, 00:15:45.438 "num_base_bdevs_discovered": 4, 00:15:45.438 "num_base_bdevs_operational": 4, 00:15:45.438 "process": { 00:15:45.438 "type": "rebuild", 00:15:45.438 "target": "spare", 00:15:45.438 "progress": { 00:15:45.438 "blocks": 21120, 00:15:45.438 "percent": 11 00:15:45.438 } 00:15:45.438 }, 00:15:45.438 "base_bdevs_list": [ 00:15:45.438 { 00:15:45.438 "name": "spare", 00:15:45.438 "uuid": "539e8c41-e369-559a-a008-1f8f8850f2e3", 00:15:45.438 "is_configured": true, 00:15:45.438 "data_offset": 2048, 00:15:45.438 "data_size": 63488 00:15:45.438 }, 00:15:45.438 { 00:15:45.438 "name": "BaseBdev2", 00:15:45.438 "uuid": "844356f8-42fb-5f24-ab05-8ca6d38e7b4c", 00:15:45.438 
"is_configured": true, 00:15:45.438 "data_offset": 2048, 00:15:45.438 "data_size": 63488 00:15:45.438 }, 00:15:45.438 { 00:15:45.438 "name": "BaseBdev3", 00:15:45.438 "uuid": "e02b1dc5-889c-5e52-875a-97e09c581538", 00:15:45.438 "is_configured": true, 00:15:45.438 "data_offset": 2048, 00:15:45.438 "data_size": 63488 00:15:45.438 }, 00:15:45.438 { 00:15:45.438 "name": "BaseBdev4", 00:15:45.438 "uuid": "369e6f58-2617-54be-b520-fed72b8a28ec", 00:15:45.438 "is_configured": true, 00:15:45.438 "data_offset": 2048, 00:15:45.438 "data_size": 63488 00:15:45.438 } 00:15:45.438 ] 00:15:45.438 }' 00:15:45.438 04:13:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:45.438 04:13:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:45.438 04:13:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:45.438 04:13:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:45.438 04:13:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:46.821 04:13:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:46.821 04:13:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:46.821 04:13:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:46.821 04:13:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:46.821 04:13:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:46.821 04:13:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:46.821 04:13:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.821 04:13:46 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.821 04:13:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.821 04:13:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.821 04:13:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.821 04:13:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:46.821 "name": "raid_bdev1", 00:15:46.821 "uuid": "eaea3d47-3db4-43d8-96c3-0a1162c2a8ee", 00:15:46.821 "strip_size_kb": 64, 00:15:46.821 "state": "online", 00:15:46.821 "raid_level": "raid5f", 00:15:46.821 "superblock": true, 00:15:46.821 "num_base_bdevs": 4, 00:15:46.821 "num_base_bdevs_discovered": 4, 00:15:46.821 "num_base_bdevs_operational": 4, 00:15:46.821 "process": { 00:15:46.821 "type": "rebuild", 00:15:46.821 "target": "spare", 00:15:46.821 "progress": { 00:15:46.821 "blocks": 42240, 00:15:46.821 "percent": 22 00:15:46.821 } 00:15:46.821 }, 00:15:46.821 "base_bdevs_list": [ 00:15:46.821 { 00:15:46.821 "name": "spare", 00:15:46.821 "uuid": "539e8c41-e369-559a-a008-1f8f8850f2e3", 00:15:46.821 "is_configured": true, 00:15:46.821 "data_offset": 2048, 00:15:46.821 "data_size": 63488 00:15:46.821 }, 00:15:46.821 { 00:15:46.821 "name": "BaseBdev2", 00:15:46.821 "uuid": "844356f8-42fb-5f24-ab05-8ca6d38e7b4c", 00:15:46.821 "is_configured": true, 00:15:46.821 "data_offset": 2048, 00:15:46.821 "data_size": 63488 00:15:46.821 }, 00:15:46.821 { 00:15:46.821 "name": "BaseBdev3", 00:15:46.821 "uuid": "e02b1dc5-889c-5e52-875a-97e09c581538", 00:15:46.821 "is_configured": true, 00:15:46.821 "data_offset": 2048, 00:15:46.821 "data_size": 63488 00:15:46.821 }, 00:15:46.821 { 00:15:46.821 "name": "BaseBdev4", 00:15:46.821 "uuid": "369e6f58-2617-54be-b520-fed72b8a28ec", 00:15:46.821 "is_configured": true, 00:15:46.821 "data_offset": 2048, 00:15:46.821 
"data_size": 63488 00:15:46.821 } 00:15:46.821 ] 00:15:46.821 }' 00:15:46.821 04:13:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:46.821 04:13:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:46.821 04:13:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:46.821 04:13:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:46.821 04:13:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:47.762 04:13:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:47.762 04:13:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:47.762 04:13:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:47.762 04:13:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:47.762 04:13:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:47.762 04:13:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:47.762 04:13:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.762 04:13:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:47.762 04:13:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.762 04:13:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.762 04:13:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.762 04:13:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:47.762 "name": 
"raid_bdev1", 00:15:47.762 "uuid": "eaea3d47-3db4-43d8-96c3-0a1162c2a8ee", 00:15:47.762 "strip_size_kb": 64, 00:15:47.762 "state": "online", 00:15:47.762 "raid_level": "raid5f", 00:15:47.762 "superblock": true, 00:15:47.762 "num_base_bdevs": 4, 00:15:47.762 "num_base_bdevs_discovered": 4, 00:15:47.762 "num_base_bdevs_operational": 4, 00:15:47.762 "process": { 00:15:47.762 "type": "rebuild", 00:15:47.762 "target": "spare", 00:15:47.762 "progress": { 00:15:47.762 "blocks": 65280, 00:15:47.762 "percent": 34 00:15:47.762 } 00:15:47.762 }, 00:15:47.762 "base_bdevs_list": [ 00:15:47.762 { 00:15:47.762 "name": "spare", 00:15:47.762 "uuid": "539e8c41-e369-559a-a008-1f8f8850f2e3", 00:15:47.762 "is_configured": true, 00:15:47.762 "data_offset": 2048, 00:15:47.762 "data_size": 63488 00:15:47.762 }, 00:15:47.762 { 00:15:47.762 "name": "BaseBdev2", 00:15:47.762 "uuid": "844356f8-42fb-5f24-ab05-8ca6d38e7b4c", 00:15:47.762 "is_configured": true, 00:15:47.762 "data_offset": 2048, 00:15:47.762 "data_size": 63488 00:15:47.762 }, 00:15:47.762 { 00:15:47.762 "name": "BaseBdev3", 00:15:47.762 "uuid": "e02b1dc5-889c-5e52-875a-97e09c581538", 00:15:47.762 "is_configured": true, 00:15:47.762 "data_offset": 2048, 00:15:47.762 "data_size": 63488 00:15:47.762 }, 00:15:47.762 { 00:15:47.762 "name": "BaseBdev4", 00:15:47.762 "uuid": "369e6f58-2617-54be-b520-fed72b8a28ec", 00:15:47.762 "is_configured": true, 00:15:47.762 "data_offset": 2048, 00:15:47.762 "data_size": 63488 00:15:47.762 } 00:15:47.762 ] 00:15:47.762 }' 00:15:47.762 04:13:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:47.762 04:13:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:47.762 04:13:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:47.762 04:13:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:47.762 04:13:47 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:48.703 04:13:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:48.703 04:13:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:48.703 04:13:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:48.703 04:13:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:48.703 04:13:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:48.703 04:13:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:48.703 04:13:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.703 04:13:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.703 04:13:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.703 04:13:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.703 04:13:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.964 04:13:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:48.964 "name": "raid_bdev1", 00:15:48.964 "uuid": "eaea3d47-3db4-43d8-96c3-0a1162c2a8ee", 00:15:48.964 "strip_size_kb": 64, 00:15:48.964 "state": "online", 00:15:48.964 "raid_level": "raid5f", 00:15:48.964 "superblock": true, 00:15:48.964 "num_base_bdevs": 4, 00:15:48.964 "num_base_bdevs_discovered": 4, 00:15:48.964 "num_base_bdevs_operational": 4, 00:15:48.964 "process": { 00:15:48.964 "type": "rebuild", 00:15:48.964 "target": "spare", 00:15:48.964 "progress": { 00:15:48.964 "blocks": 86400, 00:15:48.964 "percent": 45 00:15:48.964 } 00:15:48.964 }, 00:15:48.964 
"base_bdevs_list": [ 00:15:48.964 { 00:15:48.964 "name": "spare", 00:15:48.964 "uuid": "539e8c41-e369-559a-a008-1f8f8850f2e3", 00:15:48.964 "is_configured": true, 00:15:48.964 "data_offset": 2048, 00:15:48.964 "data_size": 63488 00:15:48.964 }, 00:15:48.964 { 00:15:48.964 "name": "BaseBdev2", 00:15:48.964 "uuid": "844356f8-42fb-5f24-ab05-8ca6d38e7b4c", 00:15:48.964 "is_configured": true, 00:15:48.964 "data_offset": 2048, 00:15:48.964 "data_size": 63488 00:15:48.964 }, 00:15:48.964 { 00:15:48.964 "name": "BaseBdev3", 00:15:48.964 "uuid": "e02b1dc5-889c-5e52-875a-97e09c581538", 00:15:48.964 "is_configured": true, 00:15:48.964 "data_offset": 2048, 00:15:48.964 "data_size": 63488 00:15:48.964 }, 00:15:48.964 { 00:15:48.964 "name": "BaseBdev4", 00:15:48.964 "uuid": "369e6f58-2617-54be-b520-fed72b8a28ec", 00:15:48.964 "is_configured": true, 00:15:48.964 "data_offset": 2048, 00:15:48.964 "data_size": 63488 00:15:48.964 } 00:15:48.964 ] 00:15:48.964 }' 00:15:48.964 04:13:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:48.964 04:13:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:48.964 04:13:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:48.964 04:13:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:48.964 04:13:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:49.904 04:13:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:49.904 04:13:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:49.904 04:13:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:49.904 04:13:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:15:49.904 04:13:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:49.904 04:13:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:49.904 04:13:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.905 04:13:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.905 04:13:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.905 04:13:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.905 04:13:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.905 04:13:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:49.905 "name": "raid_bdev1", 00:15:49.905 "uuid": "eaea3d47-3db4-43d8-96c3-0a1162c2a8ee", 00:15:49.905 "strip_size_kb": 64, 00:15:49.905 "state": "online", 00:15:49.905 "raid_level": "raid5f", 00:15:49.905 "superblock": true, 00:15:49.905 "num_base_bdevs": 4, 00:15:49.905 "num_base_bdevs_discovered": 4, 00:15:49.905 "num_base_bdevs_operational": 4, 00:15:49.905 "process": { 00:15:49.905 "type": "rebuild", 00:15:49.905 "target": "spare", 00:15:49.905 "progress": { 00:15:49.905 "blocks": 107520, 00:15:49.905 "percent": 56 00:15:49.905 } 00:15:49.905 }, 00:15:49.905 "base_bdevs_list": [ 00:15:49.905 { 00:15:49.905 "name": "spare", 00:15:49.905 "uuid": "539e8c41-e369-559a-a008-1f8f8850f2e3", 00:15:49.905 "is_configured": true, 00:15:49.905 "data_offset": 2048, 00:15:49.905 "data_size": 63488 00:15:49.905 }, 00:15:49.905 { 00:15:49.905 "name": "BaseBdev2", 00:15:49.905 "uuid": "844356f8-42fb-5f24-ab05-8ca6d38e7b4c", 00:15:49.905 "is_configured": true, 00:15:49.905 "data_offset": 2048, 00:15:49.905 "data_size": 63488 00:15:49.905 }, 00:15:49.905 { 00:15:49.905 "name": "BaseBdev3", 00:15:49.905 "uuid": 
"e02b1dc5-889c-5e52-875a-97e09c581538", 00:15:49.905 "is_configured": true, 00:15:49.905 "data_offset": 2048, 00:15:49.905 "data_size": 63488 00:15:49.905 }, 00:15:49.905 { 00:15:49.905 "name": "BaseBdev4", 00:15:49.905 "uuid": "369e6f58-2617-54be-b520-fed72b8a28ec", 00:15:49.905 "is_configured": true, 00:15:49.905 "data_offset": 2048, 00:15:49.905 "data_size": 63488 00:15:49.905 } 00:15:49.905 ] 00:15:49.905 }' 00:15:49.905 04:13:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:49.905 04:13:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:50.165 04:13:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:50.165 04:13:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:50.165 04:13:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:51.162 04:13:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:51.162 04:13:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:51.162 04:13:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:51.162 04:13:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:51.162 04:13:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:51.162 04:13:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:51.162 04:13:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.162 04:13:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.162 04:13:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:15:51.162 04:13:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.162 04:13:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.162 04:13:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:51.162 "name": "raid_bdev1", 00:15:51.162 "uuid": "eaea3d47-3db4-43d8-96c3-0a1162c2a8ee", 00:15:51.162 "strip_size_kb": 64, 00:15:51.162 "state": "online", 00:15:51.162 "raid_level": "raid5f", 00:15:51.162 "superblock": true, 00:15:51.162 "num_base_bdevs": 4, 00:15:51.162 "num_base_bdevs_discovered": 4, 00:15:51.162 "num_base_bdevs_operational": 4, 00:15:51.162 "process": { 00:15:51.162 "type": "rebuild", 00:15:51.162 "target": "spare", 00:15:51.162 "progress": { 00:15:51.162 "blocks": 130560, 00:15:51.162 "percent": 68 00:15:51.162 } 00:15:51.162 }, 00:15:51.162 "base_bdevs_list": [ 00:15:51.162 { 00:15:51.162 "name": "spare", 00:15:51.162 "uuid": "539e8c41-e369-559a-a008-1f8f8850f2e3", 00:15:51.162 "is_configured": true, 00:15:51.162 "data_offset": 2048, 00:15:51.162 "data_size": 63488 00:15:51.162 }, 00:15:51.162 { 00:15:51.162 "name": "BaseBdev2", 00:15:51.162 "uuid": "844356f8-42fb-5f24-ab05-8ca6d38e7b4c", 00:15:51.162 "is_configured": true, 00:15:51.162 "data_offset": 2048, 00:15:51.162 "data_size": 63488 00:15:51.162 }, 00:15:51.162 { 00:15:51.162 "name": "BaseBdev3", 00:15:51.162 "uuid": "e02b1dc5-889c-5e52-875a-97e09c581538", 00:15:51.162 "is_configured": true, 00:15:51.162 "data_offset": 2048, 00:15:51.162 "data_size": 63488 00:15:51.162 }, 00:15:51.162 { 00:15:51.162 "name": "BaseBdev4", 00:15:51.162 "uuid": "369e6f58-2617-54be-b520-fed72b8a28ec", 00:15:51.162 "is_configured": true, 00:15:51.162 "data_offset": 2048, 00:15:51.162 "data_size": 63488 00:15:51.162 } 00:15:51.162 ] 00:15:51.162 }' 00:15:51.162 04:13:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:51.162 04:13:51 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:51.162 04:13:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:51.162 04:13:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:51.162 04:13:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:52.104 04:13:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:52.104 04:13:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:52.104 04:13:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:52.104 04:13:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:52.104 04:13:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:52.104 04:13:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:52.104 04:13:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.104 04:13:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:52.104 04:13:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.104 04:13:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.365 04:13:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.365 04:13:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:52.365 "name": "raid_bdev1", 00:15:52.365 "uuid": "eaea3d47-3db4-43d8-96c3-0a1162c2a8ee", 00:15:52.365 "strip_size_kb": 64, 00:15:52.365 "state": "online", 00:15:52.365 "raid_level": "raid5f", 00:15:52.365 "superblock": true, 
00:15:52.365 "num_base_bdevs": 4, 00:15:52.365 "num_base_bdevs_discovered": 4, 00:15:52.365 "num_base_bdevs_operational": 4, 00:15:52.365 "process": { 00:15:52.365 "type": "rebuild", 00:15:52.365 "target": "spare", 00:15:52.365 "progress": { 00:15:52.365 "blocks": 151680, 00:15:52.365 "percent": 79 00:15:52.365 } 00:15:52.365 }, 00:15:52.365 "base_bdevs_list": [ 00:15:52.365 { 00:15:52.365 "name": "spare", 00:15:52.365 "uuid": "539e8c41-e369-559a-a008-1f8f8850f2e3", 00:15:52.365 "is_configured": true, 00:15:52.365 "data_offset": 2048, 00:15:52.365 "data_size": 63488 00:15:52.365 }, 00:15:52.365 { 00:15:52.365 "name": "BaseBdev2", 00:15:52.365 "uuid": "844356f8-42fb-5f24-ab05-8ca6d38e7b4c", 00:15:52.365 "is_configured": true, 00:15:52.365 "data_offset": 2048, 00:15:52.365 "data_size": 63488 00:15:52.365 }, 00:15:52.365 { 00:15:52.365 "name": "BaseBdev3", 00:15:52.365 "uuid": "e02b1dc5-889c-5e52-875a-97e09c581538", 00:15:52.365 "is_configured": true, 00:15:52.365 "data_offset": 2048, 00:15:52.365 "data_size": 63488 00:15:52.365 }, 00:15:52.365 { 00:15:52.365 "name": "BaseBdev4", 00:15:52.365 "uuid": "369e6f58-2617-54be-b520-fed72b8a28ec", 00:15:52.365 "is_configured": true, 00:15:52.365 "data_offset": 2048, 00:15:52.365 "data_size": 63488 00:15:52.365 } 00:15:52.365 ] 00:15:52.365 }' 00:15:52.365 04:13:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:52.365 04:13:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:52.365 04:13:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:52.365 04:13:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:52.365 04:13:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:53.307 04:13:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:53.307 04:13:53 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:53.307 04:13:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:53.307 04:13:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:53.307 04:13:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:53.307 04:13:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:53.307 04:13:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.307 04:13:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:53.307 04:13:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.307 04:13:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.307 04:13:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.307 04:13:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:53.307 "name": "raid_bdev1", 00:15:53.307 "uuid": "eaea3d47-3db4-43d8-96c3-0a1162c2a8ee", 00:15:53.307 "strip_size_kb": 64, 00:15:53.307 "state": "online", 00:15:53.307 "raid_level": "raid5f", 00:15:53.307 "superblock": true, 00:15:53.307 "num_base_bdevs": 4, 00:15:53.307 "num_base_bdevs_discovered": 4, 00:15:53.307 "num_base_bdevs_operational": 4, 00:15:53.307 "process": { 00:15:53.307 "type": "rebuild", 00:15:53.307 "target": "spare", 00:15:53.307 "progress": { 00:15:53.307 "blocks": 174720, 00:15:53.307 "percent": 91 00:15:53.307 } 00:15:53.307 }, 00:15:53.307 "base_bdevs_list": [ 00:15:53.307 { 00:15:53.307 "name": "spare", 00:15:53.307 "uuid": "539e8c41-e369-559a-a008-1f8f8850f2e3", 00:15:53.307 "is_configured": true, 00:15:53.307 "data_offset": 2048, 00:15:53.307 
"data_size": 63488 00:15:53.307 }, 00:15:53.307 { 00:15:53.307 "name": "BaseBdev2", 00:15:53.307 "uuid": "844356f8-42fb-5f24-ab05-8ca6d38e7b4c", 00:15:53.307 "is_configured": true, 00:15:53.307 "data_offset": 2048, 00:15:53.307 "data_size": 63488 00:15:53.307 }, 00:15:53.307 { 00:15:53.307 "name": "BaseBdev3", 00:15:53.307 "uuid": "e02b1dc5-889c-5e52-875a-97e09c581538", 00:15:53.307 "is_configured": true, 00:15:53.307 "data_offset": 2048, 00:15:53.307 "data_size": 63488 00:15:53.307 }, 00:15:53.307 { 00:15:53.307 "name": "BaseBdev4", 00:15:53.307 "uuid": "369e6f58-2617-54be-b520-fed72b8a28ec", 00:15:53.307 "is_configured": true, 00:15:53.307 "data_offset": 2048, 00:15:53.307 "data_size": 63488 00:15:53.307 } 00:15:53.307 ] 00:15:53.307 }' 00:15:53.307 04:13:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:53.568 04:13:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:53.568 04:13:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:53.568 04:13:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:53.568 04:13:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:54.508 [2024-11-21 04:13:54.111160] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:54.508 [2024-11-21 04:13:54.111251] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:54.508 [2024-11-21 04:13:54.111429] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:54.508 04:13:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:54.508 04:13:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:54.508 04:13:54 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:54.508 04:13:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:54.508 04:13:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:54.508 04:13:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:54.508 04:13:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.508 04:13:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.508 04:13:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.508 04:13:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.508 04:13:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.508 04:13:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:54.508 "name": "raid_bdev1", 00:15:54.508 "uuid": "eaea3d47-3db4-43d8-96c3-0a1162c2a8ee", 00:15:54.508 "strip_size_kb": 64, 00:15:54.508 "state": "online", 00:15:54.508 "raid_level": "raid5f", 00:15:54.508 "superblock": true, 00:15:54.508 "num_base_bdevs": 4, 00:15:54.508 "num_base_bdevs_discovered": 4, 00:15:54.508 "num_base_bdevs_operational": 4, 00:15:54.508 "base_bdevs_list": [ 00:15:54.508 { 00:15:54.508 "name": "spare", 00:15:54.508 "uuid": "539e8c41-e369-559a-a008-1f8f8850f2e3", 00:15:54.508 "is_configured": true, 00:15:54.508 "data_offset": 2048, 00:15:54.508 "data_size": 63488 00:15:54.508 }, 00:15:54.508 { 00:15:54.508 "name": "BaseBdev2", 00:15:54.508 "uuid": "844356f8-42fb-5f24-ab05-8ca6d38e7b4c", 00:15:54.508 "is_configured": true, 00:15:54.508 "data_offset": 2048, 00:15:54.508 "data_size": 63488 00:15:54.508 }, 00:15:54.508 { 00:15:54.508 "name": "BaseBdev3", 00:15:54.508 "uuid": "e02b1dc5-889c-5e52-875a-97e09c581538", 
00:15:54.508 "is_configured": true, 00:15:54.508 "data_offset": 2048, 00:15:54.508 "data_size": 63488 00:15:54.508 }, 00:15:54.508 { 00:15:54.508 "name": "BaseBdev4", 00:15:54.508 "uuid": "369e6f58-2617-54be-b520-fed72b8a28ec", 00:15:54.508 "is_configured": true, 00:15:54.508 "data_offset": 2048, 00:15:54.508 "data_size": 63488 00:15:54.508 } 00:15:54.508 ] 00:15:54.508 }' 00:15:54.508 04:13:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:54.508 04:13:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:54.508 04:13:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:54.769 04:13:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:54.769 04:13:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:15:54.769 04:13:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:54.769 04:13:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:54.769 04:13:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:54.769 04:13:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:54.769 04:13:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:54.769 04:13:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.770 04:13:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.770 04:13:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.770 04:13:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.770 04:13:54 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.770 04:13:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:54.770 "name": "raid_bdev1", 00:15:54.770 "uuid": "eaea3d47-3db4-43d8-96c3-0a1162c2a8ee", 00:15:54.770 "strip_size_kb": 64, 00:15:54.770 "state": "online", 00:15:54.770 "raid_level": "raid5f", 00:15:54.770 "superblock": true, 00:15:54.770 "num_base_bdevs": 4, 00:15:54.770 "num_base_bdevs_discovered": 4, 00:15:54.770 "num_base_bdevs_operational": 4, 00:15:54.770 "base_bdevs_list": [ 00:15:54.770 { 00:15:54.770 "name": "spare", 00:15:54.770 "uuid": "539e8c41-e369-559a-a008-1f8f8850f2e3", 00:15:54.770 "is_configured": true, 00:15:54.770 "data_offset": 2048, 00:15:54.770 "data_size": 63488 00:15:54.770 }, 00:15:54.770 { 00:15:54.770 "name": "BaseBdev2", 00:15:54.770 "uuid": "844356f8-42fb-5f24-ab05-8ca6d38e7b4c", 00:15:54.770 "is_configured": true, 00:15:54.770 "data_offset": 2048, 00:15:54.770 "data_size": 63488 00:15:54.770 }, 00:15:54.770 { 00:15:54.770 "name": "BaseBdev3", 00:15:54.770 "uuid": "e02b1dc5-889c-5e52-875a-97e09c581538", 00:15:54.770 "is_configured": true, 00:15:54.770 "data_offset": 2048, 00:15:54.770 "data_size": 63488 00:15:54.770 }, 00:15:54.770 { 00:15:54.770 "name": "BaseBdev4", 00:15:54.770 "uuid": "369e6f58-2617-54be-b520-fed72b8a28ec", 00:15:54.770 "is_configured": true, 00:15:54.770 "data_offset": 2048, 00:15:54.770 "data_size": 63488 00:15:54.770 } 00:15:54.770 ] 00:15:54.770 }' 00:15:54.770 04:13:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:54.770 04:13:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:54.770 04:13:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:54.770 04:13:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:54.770 04:13:54 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:54.770 04:13:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:54.770 04:13:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:54.770 04:13:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:54.770 04:13:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:54.770 04:13:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:54.770 04:13:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:54.770 04:13:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:54.770 04:13:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:54.770 04:13:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:54.770 04:13:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.770 04:13:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.770 04:13:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.770 04:13:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.770 04:13:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.770 04:13:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:54.770 "name": "raid_bdev1", 00:15:54.770 "uuid": "eaea3d47-3db4-43d8-96c3-0a1162c2a8ee", 00:15:54.770 "strip_size_kb": 64, 00:15:54.770 "state": "online", 00:15:54.770 "raid_level": "raid5f", 00:15:54.770 "superblock": true, 
00:15:54.770 "num_base_bdevs": 4, 00:15:54.770 "num_base_bdevs_discovered": 4, 00:15:54.770 "num_base_bdevs_operational": 4, 00:15:54.770 "base_bdevs_list": [ 00:15:54.770 { 00:15:54.770 "name": "spare", 00:15:54.770 "uuid": "539e8c41-e369-559a-a008-1f8f8850f2e3", 00:15:54.770 "is_configured": true, 00:15:54.770 "data_offset": 2048, 00:15:54.770 "data_size": 63488 00:15:54.770 }, 00:15:54.770 { 00:15:54.770 "name": "BaseBdev2", 00:15:54.770 "uuid": "844356f8-42fb-5f24-ab05-8ca6d38e7b4c", 00:15:54.770 "is_configured": true, 00:15:54.770 "data_offset": 2048, 00:15:54.770 "data_size": 63488 00:15:54.770 }, 00:15:54.770 { 00:15:54.770 "name": "BaseBdev3", 00:15:54.770 "uuid": "e02b1dc5-889c-5e52-875a-97e09c581538", 00:15:54.770 "is_configured": true, 00:15:54.770 "data_offset": 2048, 00:15:54.770 "data_size": 63488 00:15:54.770 }, 00:15:54.770 { 00:15:54.770 "name": "BaseBdev4", 00:15:54.770 "uuid": "369e6f58-2617-54be-b520-fed72b8a28ec", 00:15:54.770 "is_configured": true, 00:15:54.770 "data_offset": 2048, 00:15:54.770 "data_size": 63488 00:15:54.770 } 00:15:54.770 ] 00:15:54.770 }' 00:15:54.770 04:13:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:54.770 04:13:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.341 04:13:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:55.341 04:13:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.341 04:13:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.341 [2024-11-21 04:13:55.164365] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:55.341 [2024-11-21 04:13:55.164401] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:55.341 [2024-11-21 04:13:55.164502] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:55.341 
[2024-11-21 04:13:55.164673] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:55.341 [2024-11-21 04:13:55.164693] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:15:55.341 04:13:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.341 04:13:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.341 04:13:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:15:55.341 04:13:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.341 04:13:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.341 04:13:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.341 04:13:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:55.341 04:13:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:55.341 04:13:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:55.341 04:13:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:55.341 04:13:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:55.341 04:13:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:55.341 04:13:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:55.341 04:13:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:55.341 04:13:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:55.341 04:13:55 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:55.341 04:13:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:55.341 04:13:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:55.341 04:13:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:55.602 /dev/nbd0 00:15:55.602 04:13:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:55.602 04:13:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:55.602 04:13:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:55.602 04:13:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:55.602 04:13:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:55.602 04:13:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:55.602 04:13:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:55.602 04:13:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:55.602 04:13:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:55.602 04:13:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:55.602 04:13:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:55.602 1+0 records in 00:15:55.602 1+0 records out 00:15:55.602 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00046751 s, 8.8 MB/s 00:15:55.602 04:13:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:55.602 04:13:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:55.602 04:13:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:55.602 04:13:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:55.602 04:13:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:55.602 04:13:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:55.602 04:13:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:55.602 04:13:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:55.862 /dev/nbd1 00:15:55.862 04:13:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:55.862 04:13:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:55.862 04:13:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:55.862 04:13:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:55.862 04:13:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:55.862 04:13:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:55.862 04:13:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:55.862 04:13:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:55.862 04:13:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:55.862 04:13:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:55.862 04:13:55 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:55.862 1+0 records in 00:15:55.862 1+0 records out 00:15:55.862 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000430326 s, 9.5 MB/s 00:15:55.862 04:13:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:55.862 04:13:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:55.862 04:13:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:55.862 04:13:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:55.862 04:13:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:55.862 04:13:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:55.862 04:13:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:55.862 04:13:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:55.862 04:13:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:55.862 04:13:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:55.862 04:13:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:55.862 04:13:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:55.862 04:13:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:55.862 04:13:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:55.862 04:13:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:56.122 04:13:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:56.122 04:13:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:56.122 04:13:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:56.122 04:13:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:56.122 04:13:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:56.122 04:13:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:56.122 04:13:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:56.122 04:13:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:56.122 04:13:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:56.122 04:13:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:56.383 04:13:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:56.383 04:13:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:56.383 04:13:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:56.383 04:13:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:56.383 04:13:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:56.383 04:13:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:56.383 04:13:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:56.383 04:13:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # 
return 0 00:15:56.383 04:13:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:56.383 04:13:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:56.383 04:13:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.383 04:13:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.383 04:13:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.383 04:13:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:56.383 04:13:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.383 04:13:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.383 [2024-11-21 04:13:56.249319] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:56.383 [2024-11-21 04:13:56.249383] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:56.383 [2024-11-21 04:13:56.249409] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:56.383 [2024-11-21 04:13:56.249422] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:56.383 [2024-11-21 04:13:56.251896] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:56.383 [2024-11-21 04:13:56.251939] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:56.383 [2024-11-21 04:13:56.252023] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:56.383 [2024-11-21 04:13:56.252086] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:56.383 [2024-11-21 04:13:56.252210] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:15:56.383 [2024-11-21 04:13:56.252362] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:56.383 [2024-11-21 04:13:56.252461] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:56.383 spare 00:15:56.383 04:13:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.383 04:13:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:56.383 04:13:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.383 04:13:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.383 [2024-11-21 04:13:56.352384] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:15:56.383 [2024-11-21 04:13:56.352410] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:56.383 [2024-11-21 04:13:56.352684] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000045820 00:15:56.383 [2024-11-21 04:13:56.353204] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:15:56.383 [2024-11-21 04:13:56.353238] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:15:56.383 [2024-11-21 04:13:56.353428] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:56.643 04:13:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.643 04:13:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:56.643 04:13:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:56.643 04:13:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:56.643 04:13:56 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:56.643 04:13:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:56.643 04:13:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:56.643 04:13:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:56.643 04:13:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:56.643 04:13:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:56.643 04:13:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:56.643 04:13:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.643 04:13:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:56.643 04:13:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.643 04:13:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.643 04:13:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.643 04:13:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:56.643 "name": "raid_bdev1", 00:15:56.643 "uuid": "eaea3d47-3db4-43d8-96c3-0a1162c2a8ee", 00:15:56.643 "strip_size_kb": 64, 00:15:56.643 "state": "online", 00:15:56.643 "raid_level": "raid5f", 00:15:56.643 "superblock": true, 00:15:56.643 "num_base_bdevs": 4, 00:15:56.643 "num_base_bdevs_discovered": 4, 00:15:56.643 "num_base_bdevs_operational": 4, 00:15:56.643 "base_bdevs_list": [ 00:15:56.643 { 00:15:56.643 "name": "spare", 00:15:56.643 "uuid": "539e8c41-e369-559a-a008-1f8f8850f2e3", 00:15:56.643 "is_configured": true, 00:15:56.643 "data_offset": 2048, 00:15:56.643 "data_size": 63488 
00:15:56.643 }, 00:15:56.643 { 00:15:56.643 "name": "BaseBdev2", 00:15:56.643 "uuid": "844356f8-42fb-5f24-ab05-8ca6d38e7b4c", 00:15:56.643 "is_configured": true, 00:15:56.643 "data_offset": 2048, 00:15:56.643 "data_size": 63488 00:15:56.643 }, 00:15:56.643 { 00:15:56.643 "name": "BaseBdev3", 00:15:56.643 "uuid": "e02b1dc5-889c-5e52-875a-97e09c581538", 00:15:56.643 "is_configured": true, 00:15:56.643 "data_offset": 2048, 00:15:56.643 "data_size": 63488 00:15:56.643 }, 00:15:56.643 { 00:15:56.643 "name": "BaseBdev4", 00:15:56.643 "uuid": "369e6f58-2617-54be-b520-fed72b8a28ec", 00:15:56.643 "is_configured": true, 00:15:56.643 "data_offset": 2048, 00:15:56.643 "data_size": 63488 00:15:56.643 } 00:15:56.643 ] 00:15:56.643 }' 00:15:56.643 04:13:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:56.643 04:13:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.904 04:13:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:56.904 04:13:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:56.904 04:13:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:56.904 04:13:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:56.904 04:13:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:56.904 04:13:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.904 04:13:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:56.904 04:13:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.904 04:13:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.165 04:13:56 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.165 04:13:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:57.165 "name": "raid_bdev1", 00:15:57.165 "uuid": "eaea3d47-3db4-43d8-96c3-0a1162c2a8ee", 00:15:57.165 "strip_size_kb": 64, 00:15:57.165 "state": "online", 00:15:57.165 "raid_level": "raid5f", 00:15:57.165 "superblock": true, 00:15:57.165 "num_base_bdevs": 4, 00:15:57.165 "num_base_bdevs_discovered": 4, 00:15:57.165 "num_base_bdevs_operational": 4, 00:15:57.165 "base_bdevs_list": [ 00:15:57.165 { 00:15:57.165 "name": "spare", 00:15:57.165 "uuid": "539e8c41-e369-559a-a008-1f8f8850f2e3", 00:15:57.165 "is_configured": true, 00:15:57.165 "data_offset": 2048, 00:15:57.165 "data_size": 63488 00:15:57.165 }, 00:15:57.165 { 00:15:57.165 "name": "BaseBdev2", 00:15:57.165 "uuid": "844356f8-42fb-5f24-ab05-8ca6d38e7b4c", 00:15:57.165 "is_configured": true, 00:15:57.165 "data_offset": 2048, 00:15:57.165 "data_size": 63488 00:15:57.165 }, 00:15:57.165 { 00:15:57.165 "name": "BaseBdev3", 00:15:57.165 "uuid": "e02b1dc5-889c-5e52-875a-97e09c581538", 00:15:57.165 "is_configured": true, 00:15:57.165 "data_offset": 2048, 00:15:57.165 "data_size": 63488 00:15:57.165 }, 00:15:57.165 { 00:15:57.165 "name": "BaseBdev4", 00:15:57.165 "uuid": "369e6f58-2617-54be-b520-fed72b8a28ec", 00:15:57.165 "is_configured": true, 00:15:57.165 "data_offset": 2048, 00:15:57.165 "data_size": 63488 00:15:57.165 } 00:15:57.165 ] 00:15:57.165 }' 00:15:57.165 04:13:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:57.165 04:13:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:57.165 04:13:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:57.165 04:13:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:57.165 04:13:57 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.165 04:13:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.165 04:13:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.165 04:13:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:57.165 04:13:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.165 04:13:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:57.165 04:13:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:57.165 04:13:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.165 04:13:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.165 [2024-11-21 04:13:57.056349] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:57.165 04:13:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.165 04:13:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:57.165 04:13:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:57.165 04:13:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:57.165 04:13:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:57.165 04:13:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:57.165 04:13:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:57.165 04:13:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:15:57.165 04:13:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.165 04:13:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.165 04:13:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.165 04:13:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.165 04:13:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.165 04:13:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.165 04:13:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.165 04:13:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.165 04:13:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.165 "name": "raid_bdev1", 00:15:57.165 "uuid": "eaea3d47-3db4-43d8-96c3-0a1162c2a8ee", 00:15:57.165 "strip_size_kb": 64, 00:15:57.165 "state": "online", 00:15:57.165 "raid_level": "raid5f", 00:15:57.165 "superblock": true, 00:15:57.165 "num_base_bdevs": 4, 00:15:57.165 "num_base_bdevs_discovered": 3, 00:15:57.165 "num_base_bdevs_operational": 3, 00:15:57.165 "base_bdevs_list": [ 00:15:57.165 { 00:15:57.165 "name": null, 00:15:57.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.165 "is_configured": false, 00:15:57.165 "data_offset": 0, 00:15:57.165 "data_size": 63488 00:15:57.165 }, 00:15:57.165 { 00:15:57.165 "name": "BaseBdev2", 00:15:57.165 "uuid": "844356f8-42fb-5f24-ab05-8ca6d38e7b4c", 00:15:57.165 "is_configured": true, 00:15:57.165 "data_offset": 2048, 00:15:57.165 "data_size": 63488 00:15:57.165 }, 00:15:57.165 { 00:15:57.165 "name": "BaseBdev3", 00:15:57.165 "uuid": "e02b1dc5-889c-5e52-875a-97e09c581538", 00:15:57.165 "is_configured": true, 00:15:57.165 "data_offset": 2048, 
00:15:57.165 "data_size": 63488 00:15:57.165 }, 00:15:57.165 { 00:15:57.165 "name": "BaseBdev4", 00:15:57.165 "uuid": "369e6f58-2617-54be-b520-fed72b8a28ec", 00:15:57.165 "is_configured": true, 00:15:57.165 "data_offset": 2048, 00:15:57.165 "data_size": 63488 00:15:57.165 } 00:15:57.165 ] 00:15:57.165 }' 00:15:57.165 04:13:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.165 04:13:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.736 04:13:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:57.736 04:13:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.736 04:13:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.736 [2024-11-21 04:13:57.544381] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:57.736 [2024-11-21 04:13:57.544575] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:57.736 [2024-11-21 04:13:57.544597] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:57.736 [2024-11-21 04:13:57.544677] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:57.736 [2024-11-21 04:13:57.551738] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000458f0 00:15:57.736 04:13:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.736 04:13:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:57.736 [2024-11-21 04:13:57.554345] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:58.676 04:13:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:58.676 04:13:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:58.676 04:13:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:58.676 04:13:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:58.676 04:13:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:58.676 04:13:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.676 04:13:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.676 04:13:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.676 04:13:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.676 04:13:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.676 04:13:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:58.676 "name": "raid_bdev1", 00:15:58.676 "uuid": "eaea3d47-3db4-43d8-96c3-0a1162c2a8ee", 00:15:58.676 "strip_size_kb": 64, 00:15:58.676 "state": "online", 00:15:58.676 
"raid_level": "raid5f", 00:15:58.676 "superblock": true, 00:15:58.676 "num_base_bdevs": 4, 00:15:58.676 "num_base_bdevs_discovered": 4, 00:15:58.676 "num_base_bdevs_operational": 4, 00:15:58.676 "process": { 00:15:58.676 "type": "rebuild", 00:15:58.676 "target": "spare", 00:15:58.676 "progress": { 00:15:58.676 "blocks": 19200, 00:15:58.676 "percent": 10 00:15:58.676 } 00:15:58.676 }, 00:15:58.676 "base_bdevs_list": [ 00:15:58.676 { 00:15:58.676 "name": "spare", 00:15:58.676 "uuid": "539e8c41-e369-559a-a008-1f8f8850f2e3", 00:15:58.676 "is_configured": true, 00:15:58.676 "data_offset": 2048, 00:15:58.676 "data_size": 63488 00:15:58.676 }, 00:15:58.676 { 00:15:58.676 "name": "BaseBdev2", 00:15:58.676 "uuid": "844356f8-42fb-5f24-ab05-8ca6d38e7b4c", 00:15:58.676 "is_configured": true, 00:15:58.676 "data_offset": 2048, 00:15:58.676 "data_size": 63488 00:15:58.676 }, 00:15:58.676 { 00:15:58.676 "name": "BaseBdev3", 00:15:58.676 "uuid": "e02b1dc5-889c-5e52-875a-97e09c581538", 00:15:58.676 "is_configured": true, 00:15:58.676 "data_offset": 2048, 00:15:58.676 "data_size": 63488 00:15:58.676 }, 00:15:58.676 { 00:15:58.676 "name": "BaseBdev4", 00:15:58.676 "uuid": "369e6f58-2617-54be-b520-fed72b8a28ec", 00:15:58.676 "is_configured": true, 00:15:58.676 "data_offset": 2048, 00:15:58.676 "data_size": 63488 00:15:58.676 } 00:15:58.676 ] 00:15:58.676 }' 00:15:58.676 04:13:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:58.937 04:13:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:58.937 04:13:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:58.937 04:13:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:58.937 04:13:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:58.937 04:13:58 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.937 04:13:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.937 [2024-11-21 04:13:58.709745] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:58.937 [2024-11-21 04:13:58.760789] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:58.937 [2024-11-21 04:13:58.760863] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:58.937 [2024-11-21 04:13:58.760885] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:58.937 [2024-11-21 04:13:58.760892] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:58.937 04:13:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.937 04:13:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:58.937 04:13:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:58.937 04:13:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:58.937 04:13:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:58.937 04:13:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:58.937 04:13:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:58.937 04:13:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:58.937 04:13:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:58.937 04:13:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:58.937 04:13:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:15:58.937 04:13:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.937 04:13:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.937 04:13:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.937 04:13:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.937 04:13:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.937 04:13:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:58.937 "name": "raid_bdev1", 00:15:58.937 "uuid": "eaea3d47-3db4-43d8-96c3-0a1162c2a8ee", 00:15:58.937 "strip_size_kb": 64, 00:15:58.937 "state": "online", 00:15:58.937 "raid_level": "raid5f", 00:15:58.937 "superblock": true, 00:15:58.937 "num_base_bdevs": 4, 00:15:58.937 "num_base_bdevs_discovered": 3, 00:15:58.937 "num_base_bdevs_operational": 3, 00:15:58.937 "base_bdevs_list": [ 00:15:58.937 { 00:15:58.937 "name": null, 00:15:58.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.937 "is_configured": false, 00:15:58.937 "data_offset": 0, 00:15:58.937 "data_size": 63488 00:15:58.937 }, 00:15:58.937 { 00:15:58.937 "name": "BaseBdev2", 00:15:58.937 "uuid": "844356f8-42fb-5f24-ab05-8ca6d38e7b4c", 00:15:58.937 "is_configured": true, 00:15:58.937 "data_offset": 2048, 00:15:58.937 "data_size": 63488 00:15:58.937 }, 00:15:58.937 { 00:15:58.937 "name": "BaseBdev3", 00:15:58.937 "uuid": "e02b1dc5-889c-5e52-875a-97e09c581538", 00:15:58.937 "is_configured": true, 00:15:58.937 "data_offset": 2048, 00:15:58.937 "data_size": 63488 00:15:58.937 }, 00:15:58.937 { 00:15:58.937 "name": "BaseBdev4", 00:15:58.937 "uuid": "369e6f58-2617-54be-b520-fed72b8a28ec", 00:15:58.937 "is_configured": true, 00:15:58.937 "data_offset": 2048, 00:15:58.937 "data_size": 63488 00:15:58.937 } 00:15:58.937 ] 00:15:58.937 
}' 00:15:58.937 04:13:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:58.937 04:13:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.507 04:13:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:59.507 04:13:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.507 04:13:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.507 [2024-11-21 04:13:59.237319] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:59.507 [2024-11-21 04:13:59.237376] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:59.507 [2024-11-21 04:13:59.237410] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:15:59.507 [2024-11-21 04:13:59.237433] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:59.507 [2024-11-21 04:13:59.237947] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:59.507 [2024-11-21 04:13:59.237972] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:59.507 [2024-11-21 04:13:59.238064] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:59.507 [2024-11-21 04:13:59.238086] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:59.507 [2024-11-21 04:13:59.238124] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:59.507 [2024-11-21 04:13:59.238155] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:59.507 [2024-11-21 04:13:59.244276] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000459c0 00:15:59.507 spare 00:15:59.507 04:13:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.507 04:13:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:59.507 [2024-11-21 04:13:59.246781] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:00.445 04:14:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:00.445 04:14:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:00.445 04:14:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:00.445 04:14:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:00.445 04:14:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:00.445 04:14:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.445 04:14:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.445 04:14:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.445 04:14:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.445 04:14:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.445 04:14:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:00.445 "name": "raid_bdev1", 00:16:00.445 "uuid": "eaea3d47-3db4-43d8-96c3-0a1162c2a8ee", 00:16:00.445 "strip_size_kb": 64, 00:16:00.445 "state": 
"online", 00:16:00.445 "raid_level": "raid5f", 00:16:00.445 "superblock": true, 00:16:00.445 "num_base_bdevs": 4, 00:16:00.445 "num_base_bdevs_discovered": 4, 00:16:00.445 "num_base_bdevs_operational": 4, 00:16:00.445 "process": { 00:16:00.445 "type": "rebuild", 00:16:00.445 "target": "spare", 00:16:00.445 "progress": { 00:16:00.445 "blocks": 19200, 00:16:00.445 "percent": 10 00:16:00.445 } 00:16:00.445 }, 00:16:00.445 "base_bdevs_list": [ 00:16:00.445 { 00:16:00.445 "name": "spare", 00:16:00.445 "uuid": "539e8c41-e369-559a-a008-1f8f8850f2e3", 00:16:00.445 "is_configured": true, 00:16:00.445 "data_offset": 2048, 00:16:00.445 "data_size": 63488 00:16:00.445 }, 00:16:00.445 { 00:16:00.445 "name": "BaseBdev2", 00:16:00.445 "uuid": "844356f8-42fb-5f24-ab05-8ca6d38e7b4c", 00:16:00.445 "is_configured": true, 00:16:00.445 "data_offset": 2048, 00:16:00.445 "data_size": 63488 00:16:00.445 }, 00:16:00.445 { 00:16:00.445 "name": "BaseBdev3", 00:16:00.445 "uuid": "e02b1dc5-889c-5e52-875a-97e09c581538", 00:16:00.445 "is_configured": true, 00:16:00.445 "data_offset": 2048, 00:16:00.445 "data_size": 63488 00:16:00.445 }, 00:16:00.445 { 00:16:00.445 "name": "BaseBdev4", 00:16:00.445 "uuid": "369e6f58-2617-54be-b520-fed72b8a28ec", 00:16:00.445 "is_configured": true, 00:16:00.445 "data_offset": 2048, 00:16:00.445 "data_size": 63488 00:16:00.445 } 00:16:00.445 ] 00:16:00.445 }' 00:16:00.445 04:14:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:00.445 04:14:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:00.446 04:14:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:00.446 04:14:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:00.446 04:14:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:00.446 04:14:00 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.446 04:14:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.446 [2024-11-21 04:14:00.386130] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:00.706 [2024-11-21 04:14:00.453091] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:00.706 [2024-11-21 04:14:00.453151] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:00.706 [2024-11-21 04:14:00.453167] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:00.706 [2024-11-21 04:14:00.453179] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:00.706 04:14:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.706 04:14:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:00.706 04:14:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:00.706 04:14:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:00.706 04:14:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:00.706 04:14:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:00.706 04:14:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:00.706 04:14:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:00.706 04:14:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:00.706 04:14:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:00.706 04:14:00 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:00.706 04:14:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.706 04:14:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.706 04:14:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.706 04:14:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.706 04:14:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.706 04:14:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:00.706 "name": "raid_bdev1", 00:16:00.706 "uuid": "eaea3d47-3db4-43d8-96c3-0a1162c2a8ee", 00:16:00.706 "strip_size_kb": 64, 00:16:00.706 "state": "online", 00:16:00.706 "raid_level": "raid5f", 00:16:00.706 "superblock": true, 00:16:00.706 "num_base_bdevs": 4, 00:16:00.706 "num_base_bdevs_discovered": 3, 00:16:00.706 "num_base_bdevs_operational": 3, 00:16:00.706 "base_bdevs_list": [ 00:16:00.706 { 00:16:00.706 "name": null, 00:16:00.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.706 "is_configured": false, 00:16:00.706 "data_offset": 0, 00:16:00.706 "data_size": 63488 00:16:00.706 }, 00:16:00.706 { 00:16:00.706 "name": "BaseBdev2", 00:16:00.706 "uuid": "844356f8-42fb-5f24-ab05-8ca6d38e7b4c", 00:16:00.706 "is_configured": true, 00:16:00.706 "data_offset": 2048, 00:16:00.706 "data_size": 63488 00:16:00.706 }, 00:16:00.706 { 00:16:00.706 "name": "BaseBdev3", 00:16:00.706 "uuid": "e02b1dc5-889c-5e52-875a-97e09c581538", 00:16:00.706 "is_configured": true, 00:16:00.706 "data_offset": 2048, 00:16:00.706 "data_size": 63488 00:16:00.706 }, 00:16:00.706 { 00:16:00.706 "name": "BaseBdev4", 00:16:00.706 "uuid": "369e6f58-2617-54be-b520-fed72b8a28ec", 00:16:00.706 "is_configured": true, 00:16:00.706 "data_offset": 2048, 00:16:00.706 
"data_size": 63488 00:16:00.706 } 00:16:00.706 ] 00:16:00.706 }' 00:16:00.706 04:14:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:00.706 04:14:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.966 04:14:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:00.966 04:14:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:00.966 04:14:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:00.966 04:14:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:00.966 04:14:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:00.966 04:14:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.966 04:14:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.966 04:14:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.966 04:14:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.966 04:14:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.966 04:14:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:00.966 "name": "raid_bdev1", 00:16:00.966 "uuid": "eaea3d47-3db4-43d8-96c3-0a1162c2a8ee", 00:16:00.966 "strip_size_kb": 64, 00:16:00.966 "state": "online", 00:16:00.966 "raid_level": "raid5f", 00:16:00.966 "superblock": true, 00:16:00.966 "num_base_bdevs": 4, 00:16:00.966 "num_base_bdevs_discovered": 3, 00:16:00.966 "num_base_bdevs_operational": 3, 00:16:00.966 "base_bdevs_list": [ 00:16:00.966 { 00:16:00.966 "name": null, 00:16:00.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.966 
"is_configured": false, 00:16:00.966 "data_offset": 0, 00:16:00.966 "data_size": 63488 00:16:00.966 }, 00:16:00.966 { 00:16:00.966 "name": "BaseBdev2", 00:16:00.966 "uuid": "844356f8-42fb-5f24-ab05-8ca6d38e7b4c", 00:16:00.966 "is_configured": true, 00:16:00.966 "data_offset": 2048, 00:16:00.966 "data_size": 63488 00:16:00.966 }, 00:16:00.966 { 00:16:00.966 "name": "BaseBdev3", 00:16:00.966 "uuid": "e02b1dc5-889c-5e52-875a-97e09c581538", 00:16:00.966 "is_configured": true, 00:16:00.966 "data_offset": 2048, 00:16:00.966 "data_size": 63488 00:16:00.966 }, 00:16:00.967 { 00:16:00.967 "name": "BaseBdev4", 00:16:00.967 "uuid": "369e6f58-2617-54be-b520-fed72b8a28ec", 00:16:00.967 "is_configured": true, 00:16:00.967 "data_offset": 2048, 00:16:00.967 "data_size": 63488 00:16:00.967 } 00:16:00.967 ] 00:16:00.967 }' 00:16:00.967 04:14:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:00.967 04:14:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:00.967 04:14:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:01.226 04:14:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:01.226 04:14:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:01.226 04:14:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.226 04:14:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.226 04:14:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.226 04:14:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:01.226 04:14:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.226 04:14:00 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.226 [2024-11-21 04:14:00.981401] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:01.226 [2024-11-21 04:14:00.981480] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:01.226 [2024-11-21 04:14:00.981503] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:16:01.226 [2024-11-21 04:14:00.981514] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:01.226 [2024-11-21 04:14:00.981962] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:01.226 [2024-11-21 04:14:00.981986] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:01.226 [2024-11-21 04:14:00.982058] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:01.226 [2024-11-21 04:14:00.982084] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:01.226 [2024-11-21 04:14:00.982102] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:01.227 [2024-11-21 04:14:00.982125] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:01.227 BaseBdev1 00:16:01.227 04:14:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.227 04:14:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:02.166 04:14:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:02.166 04:14:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:02.166 04:14:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:16:02.166 04:14:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:02.166 04:14:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:02.166 04:14:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:02.166 04:14:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:02.166 04:14:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:02.166 04:14:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:02.166 04:14:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:02.166 04:14:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.166 04:14:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.166 04:14:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.166 04:14:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.166 04:14:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.166 04:14:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:02.166 "name": "raid_bdev1", 00:16:02.166 "uuid": "eaea3d47-3db4-43d8-96c3-0a1162c2a8ee", 00:16:02.166 "strip_size_kb": 64, 00:16:02.166 "state": "online", 00:16:02.166 "raid_level": "raid5f", 00:16:02.166 "superblock": true, 00:16:02.166 "num_base_bdevs": 4, 00:16:02.166 "num_base_bdevs_discovered": 3, 00:16:02.166 "num_base_bdevs_operational": 3, 00:16:02.166 "base_bdevs_list": [ 00:16:02.166 { 00:16:02.166 "name": null, 00:16:02.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.166 "is_configured": false, 00:16:02.166 
"data_offset": 0, 00:16:02.166 "data_size": 63488 00:16:02.166 }, 00:16:02.166 { 00:16:02.166 "name": "BaseBdev2", 00:16:02.166 "uuid": "844356f8-42fb-5f24-ab05-8ca6d38e7b4c", 00:16:02.166 "is_configured": true, 00:16:02.166 "data_offset": 2048, 00:16:02.166 "data_size": 63488 00:16:02.166 }, 00:16:02.166 { 00:16:02.166 "name": "BaseBdev3", 00:16:02.166 "uuid": "e02b1dc5-889c-5e52-875a-97e09c581538", 00:16:02.166 "is_configured": true, 00:16:02.166 "data_offset": 2048, 00:16:02.166 "data_size": 63488 00:16:02.166 }, 00:16:02.166 { 00:16:02.166 "name": "BaseBdev4", 00:16:02.166 "uuid": "369e6f58-2617-54be-b520-fed72b8a28ec", 00:16:02.166 "is_configured": true, 00:16:02.166 "data_offset": 2048, 00:16:02.166 "data_size": 63488 00:16:02.166 } 00:16:02.166 ] 00:16:02.166 }' 00:16:02.166 04:14:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:02.166 04:14:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.737 04:14:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:02.737 04:14:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:02.737 04:14:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:02.737 04:14:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:02.737 04:14:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:02.737 04:14:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.737 04:14:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.737 04:14:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.737 04:14:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:02.737 04:14:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.737 04:14:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:02.737 "name": "raid_bdev1", 00:16:02.737 "uuid": "eaea3d47-3db4-43d8-96c3-0a1162c2a8ee", 00:16:02.737 "strip_size_kb": 64, 00:16:02.737 "state": "online", 00:16:02.737 "raid_level": "raid5f", 00:16:02.737 "superblock": true, 00:16:02.737 "num_base_bdevs": 4, 00:16:02.737 "num_base_bdevs_discovered": 3, 00:16:02.737 "num_base_bdevs_operational": 3, 00:16:02.737 "base_bdevs_list": [ 00:16:02.737 { 00:16:02.737 "name": null, 00:16:02.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.737 "is_configured": false, 00:16:02.737 "data_offset": 0, 00:16:02.737 "data_size": 63488 00:16:02.737 }, 00:16:02.737 { 00:16:02.737 "name": "BaseBdev2", 00:16:02.737 "uuid": "844356f8-42fb-5f24-ab05-8ca6d38e7b4c", 00:16:02.737 "is_configured": true, 00:16:02.737 "data_offset": 2048, 00:16:02.737 "data_size": 63488 00:16:02.737 }, 00:16:02.737 { 00:16:02.737 "name": "BaseBdev3", 00:16:02.737 "uuid": "e02b1dc5-889c-5e52-875a-97e09c581538", 00:16:02.737 "is_configured": true, 00:16:02.737 "data_offset": 2048, 00:16:02.737 "data_size": 63488 00:16:02.737 }, 00:16:02.737 { 00:16:02.737 "name": "BaseBdev4", 00:16:02.737 "uuid": "369e6f58-2617-54be-b520-fed72b8a28ec", 00:16:02.737 "is_configured": true, 00:16:02.737 "data_offset": 2048, 00:16:02.737 "data_size": 63488 00:16:02.737 } 00:16:02.737 ] 00:16:02.737 }' 00:16:02.737 04:14:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:02.737 04:14:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:02.737 04:14:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:02.737 04:14:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:02.737 
04:14:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:02.737 04:14:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:16:02.737 04:14:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:02.737 04:14:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:02.737 04:14:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:02.737 04:14:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:02.737 04:14:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:02.737 04:14:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:02.737 04:14:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.737 04:14:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.737 [2024-11-21 04:14:02.608353] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:02.737 [2024-11-21 04:14:02.608517] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:02.737 [2024-11-21 04:14:02.608529] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:02.737 request: 00:16:02.737 { 00:16:02.737 "base_bdev": "BaseBdev1", 00:16:02.737 "raid_bdev": "raid_bdev1", 00:16:02.737 "method": "bdev_raid_add_base_bdev", 00:16:02.737 "req_id": 1 00:16:02.737 } 00:16:02.737 Got JSON-RPC error response 00:16:02.737 response: 00:16:02.737 { 00:16:02.737 "code": -22, 00:16:02.737 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:16:02.737 } 00:16:02.737 04:14:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:02.737 04:14:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:16:02.737 04:14:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:02.737 04:14:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:02.737 04:14:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:02.737 04:14:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:03.678 04:14:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:03.678 04:14:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:03.678 04:14:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:03.678 04:14:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:03.678 04:14:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:03.678 04:14:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:03.678 04:14:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:03.678 04:14:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:03.678 04:14:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:03.678 04:14:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:03.678 04:14:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.678 04:14:03 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.678 04:14:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.678 04:14:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.678 04:14:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.937 04:14:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:03.937 "name": "raid_bdev1", 00:16:03.938 "uuid": "eaea3d47-3db4-43d8-96c3-0a1162c2a8ee", 00:16:03.938 "strip_size_kb": 64, 00:16:03.938 "state": "online", 00:16:03.938 "raid_level": "raid5f", 00:16:03.938 "superblock": true, 00:16:03.938 "num_base_bdevs": 4, 00:16:03.938 "num_base_bdevs_discovered": 3, 00:16:03.938 "num_base_bdevs_operational": 3, 00:16:03.938 "base_bdevs_list": [ 00:16:03.938 { 00:16:03.938 "name": null, 00:16:03.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.938 "is_configured": false, 00:16:03.938 "data_offset": 0, 00:16:03.938 "data_size": 63488 00:16:03.938 }, 00:16:03.938 { 00:16:03.938 "name": "BaseBdev2", 00:16:03.938 "uuid": "844356f8-42fb-5f24-ab05-8ca6d38e7b4c", 00:16:03.938 "is_configured": true, 00:16:03.938 "data_offset": 2048, 00:16:03.938 "data_size": 63488 00:16:03.938 }, 00:16:03.938 { 00:16:03.938 "name": "BaseBdev3", 00:16:03.938 "uuid": "e02b1dc5-889c-5e52-875a-97e09c581538", 00:16:03.938 "is_configured": true, 00:16:03.938 "data_offset": 2048, 00:16:03.938 "data_size": 63488 00:16:03.938 }, 00:16:03.938 { 00:16:03.938 "name": "BaseBdev4", 00:16:03.938 "uuid": "369e6f58-2617-54be-b520-fed72b8a28ec", 00:16:03.938 "is_configured": true, 00:16:03.938 "data_offset": 2048, 00:16:03.938 "data_size": 63488 00:16:03.938 } 00:16:03.938 ] 00:16:03.938 }' 00:16:03.938 04:14:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:03.938 04:14:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:16:04.196 04:14:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:04.196 04:14:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:04.196 04:14:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:04.196 04:14:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:04.196 04:14:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:04.196 04:14:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.196 04:14:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.196 04:14:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.196 04:14:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.196 04:14:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.196 04:14:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:04.196 "name": "raid_bdev1", 00:16:04.196 "uuid": "eaea3d47-3db4-43d8-96c3-0a1162c2a8ee", 00:16:04.196 "strip_size_kb": 64, 00:16:04.197 "state": "online", 00:16:04.197 "raid_level": "raid5f", 00:16:04.197 "superblock": true, 00:16:04.197 "num_base_bdevs": 4, 00:16:04.197 "num_base_bdevs_discovered": 3, 00:16:04.197 "num_base_bdevs_operational": 3, 00:16:04.197 "base_bdevs_list": [ 00:16:04.197 { 00:16:04.197 "name": null, 00:16:04.197 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.197 "is_configured": false, 00:16:04.197 "data_offset": 0, 00:16:04.197 "data_size": 63488 00:16:04.197 }, 00:16:04.197 { 00:16:04.197 "name": "BaseBdev2", 00:16:04.197 "uuid": "844356f8-42fb-5f24-ab05-8ca6d38e7b4c", 00:16:04.197 "is_configured": true, 
00:16:04.197 "data_offset": 2048, 00:16:04.197 "data_size": 63488 00:16:04.197 }, 00:16:04.197 { 00:16:04.197 "name": "BaseBdev3", 00:16:04.197 "uuid": "e02b1dc5-889c-5e52-875a-97e09c581538", 00:16:04.197 "is_configured": true, 00:16:04.197 "data_offset": 2048, 00:16:04.197 "data_size": 63488 00:16:04.197 }, 00:16:04.197 { 00:16:04.197 "name": "BaseBdev4", 00:16:04.197 "uuid": "369e6f58-2617-54be-b520-fed72b8a28ec", 00:16:04.197 "is_configured": true, 00:16:04.197 "data_offset": 2048, 00:16:04.197 "data_size": 63488 00:16:04.197 } 00:16:04.197 ] 00:16:04.197 }' 00:16:04.197 04:14:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:04.457 04:14:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:04.457 04:14:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:04.457 04:14:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:04.457 04:14:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 95562 00:16:04.457 04:14:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 95562 ']' 00:16:04.457 04:14:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 95562 00:16:04.457 04:14:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:16:04.457 04:14:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:04.457 04:14:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95562 00:16:04.457 04:14:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:04.457 04:14:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:04.457 04:14:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 
-- # echo 'killing process with pid 95562' 00:16:04.457 killing process with pid 95562 00:16:04.457 04:14:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 95562 00:16:04.457 Received shutdown signal, test time was about 60.000000 seconds 00:16:04.457 00:16:04.457 Latency(us) 00:16:04.457 [2024-11-21T04:14:04.430Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:04.457 [2024-11-21T04:14:04.430Z] =================================================================================================================== 00:16:04.457 [2024-11-21T04:14:04.430Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:04.457 [2024-11-21 04:14:04.307178] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:04.457 04:14:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 95562 00:16:04.457 [2024-11-21 04:14:04.307330] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:04.457 [2024-11-21 04:14:04.307415] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:04.457 [2024-11-21 04:14:04.307424] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:16:04.457 [2024-11-21 04:14:04.400485] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:05.027 04:14:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:16:05.027 00:16:05.027 real 0m25.503s 00:16:05.027 user 0m32.390s 00:16:05.027 sys 0m3.104s 00:16:05.027 04:14:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:05.027 04:14:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.027 ************************************ 00:16:05.027 END TEST raid5f_rebuild_test_sb 00:16:05.027 ************************************ 00:16:05.027 04:14:04 bdev_raid -- 
bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:16:05.027 04:14:04 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:16:05.027 04:14:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:05.027 04:14:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:05.027 04:14:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:05.027 ************************************ 00:16:05.027 START TEST raid_state_function_test_sb_4k 00:16:05.027 ************************************ 00:16:05.027 04:14:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:16:05.027 04:14:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:16:05.027 04:14:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:16:05.027 04:14:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:05.027 04:14:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:05.027 04:14:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:05.027 04:14:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:05.027 04:14:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:05.027 04:14:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:05.027 04:14:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:05.027 04:14:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:05.027 04:14:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:05.028 04:14:04 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:05.028 04:14:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:05.028 04:14:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:05.028 04:14:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:05.028 04:14:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:05.028 04:14:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:05.028 04:14:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:05.028 04:14:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:16:05.028 04:14:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:16:05.028 04:14:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:05.028 04:14:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:05.028 04:14:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=96364 00:16:05.028 04:14:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:05.028 04:14:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 96364' 00:16:05.028 Process raid pid: 96364 00:16:05.028 04:14:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 96364 00:16:05.028 04:14:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 96364 ']' 00:16:05.028 04:14:04 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:05.028 04:14:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:05.028 04:14:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:05.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:05.028 04:14:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:05.028 04:14:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:05.028 [2024-11-21 04:14:04.889450] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:16:05.028 [2024-11-21 04:14:04.889578] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:05.288 [2024-11-21 04:14:05.047274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:05.288 [2024-11-21 04:14:05.089189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:05.288 [2024-11-21 04:14:05.168425] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:05.288 [2024-11-21 04:14:05.168460] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:05.859 04:14:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:05.859 04:14:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:16:05.859 04:14:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n 
Existed_Raid 00:16:05.859 04:14:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.859 04:14:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:05.859 [2024-11-21 04:14:05.721392] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:05.859 [2024-11-21 04:14:05.721447] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:05.859 [2024-11-21 04:14:05.721481] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:05.859 [2024-11-21 04:14:05.721492] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:05.859 04:14:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.859 04:14:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:05.859 04:14:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:05.859 04:14:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:05.859 04:14:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:05.859 04:14:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:05.859 04:14:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:05.859 04:14:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:05.859 04:14:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:05.859 04:14:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:05.859 
04:14:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:05.859 04:14:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.859 04:14:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:05.859 04:14:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.859 04:14:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:05.859 04:14:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.859 04:14:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:05.859 "name": "Existed_Raid", 00:16:05.859 "uuid": "d7987ae9-f383-4b7d-b66a-db0b67c5f0d9", 00:16:05.859 "strip_size_kb": 0, 00:16:05.859 "state": "configuring", 00:16:05.859 "raid_level": "raid1", 00:16:05.859 "superblock": true, 00:16:05.859 "num_base_bdevs": 2, 00:16:05.859 "num_base_bdevs_discovered": 0, 00:16:05.859 "num_base_bdevs_operational": 2, 00:16:05.859 "base_bdevs_list": [ 00:16:05.859 { 00:16:05.859 "name": "BaseBdev1", 00:16:05.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.859 "is_configured": false, 00:16:05.859 "data_offset": 0, 00:16:05.859 "data_size": 0 00:16:05.859 }, 00:16:05.859 { 00:16:05.859 "name": "BaseBdev2", 00:16:05.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.859 "is_configured": false, 00:16:05.859 "data_offset": 0, 00:16:05.859 "data_size": 0 00:16:05.859 } 00:16:05.859 ] 00:16:05.859 }' 00:16:05.859 04:14:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:05.859 04:14:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:06.430 04:14:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:16:06.430 04:14:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.430 04:14:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:06.430 [2024-11-21 04:14:06.184473] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:06.430 [2024-11-21 04:14:06.184571] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:16:06.430 04:14:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.430 04:14:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:06.430 04:14:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.430 04:14:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:06.430 [2024-11-21 04:14:06.196476] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:06.430 [2024-11-21 04:14:06.196563] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:06.430 [2024-11-21 04:14:06.196589] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:06.430 [2024-11-21 04:14:06.196625] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:06.430 04:14:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.430 04:14:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:16:06.430 04:14:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.430 04:14:06 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:06.430 [2024-11-21 04:14:06.223979] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:06.430 BaseBdev1 00:16:06.430 04:14:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.430 04:14:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:06.430 04:14:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:06.430 04:14:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:06.430 04:14:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:16:06.430 04:14:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:06.430 04:14:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:06.430 04:14:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:06.430 04:14:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.430 04:14:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:06.430 04:14:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.430 04:14:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:06.430 04:14:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.430 04:14:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:06.430 [ 00:16:06.430 { 00:16:06.430 "name": "BaseBdev1", 00:16:06.430 "aliases": [ 00:16:06.430 
"2d5672ec-e838-4db2-95e4-7303cfa48f1b" 00:16:06.430 ], 00:16:06.430 "product_name": "Malloc disk", 00:16:06.430 "block_size": 4096, 00:16:06.430 "num_blocks": 8192, 00:16:06.430 "uuid": "2d5672ec-e838-4db2-95e4-7303cfa48f1b", 00:16:06.430 "assigned_rate_limits": { 00:16:06.430 "rw_ios_per_sec": 0, 00:16:06.430 "rw_mbytes_per_sec": 0, 00:16:06.430 "r_mbytes_per_sec": 0, 00:16:06.430 "w_mbytes_per_sec": 0 00:16:06.430 }, 00:16:06.430 "claimed": true, 00:16:06.430 "claim_type": "exclusive_write", 00:16:06.430 "zoned": false, 00:16:06.430 "supported_io_types": { 00:16:06.430 "read": true, 00:16:06.430 "write": true, 00:16:06.430 "unmap": true, 00:16:06.430 "flush": true, 00:16:06.430 "reset": true, 00:16:06.430 "nvme_admin": false, 00:16:06.430 "nvme_io": false, 00:16:06.430 "nvme_io_md": false, 00:16:06.430 "write_zeroes": true, 00:16:06.430 "zcopy": true, 00:16:06.430 "get_zone_info": false, 00:16:06.430 "zone_management": false, 00:16:06.430 "zone_append": false, 00:16:06.430 "compare": false, 00:16:06.430 "compare_and_write": false, 00:16:06.430 "abort": true, 00:16:06.430 "seek_hole": false, 00:16:06.430 "seek_data": false, 00:16:06.430 "copy": true, 00:16:06.430 "nvme_iov_md": false 00:16:06.430 }, 00:16:06.430 "memory_domains": [ 00:16:06.430 { 00:16:06.430 "dma_device_id": "system", 00:16:06.430 "dma_device_type": 1 00:16:06.430 }, 00:16:06.430 { 00:16:06.430 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:06.430 "dma_device_type": 2 00:16:06.430 } 00:16:06.430 ], 00:16:06.430 "driver_specific": {} 00:16:06.430 } 00:16:06.430 ] 00:16:06.430 04:14:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.430 04:14:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:16:06.430 04:14:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:06.430 04:14:06 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:06.430 04:14:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:06.430 04:14:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:06.430 04:14:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:06.430 04:14:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:06.430 04:14:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.430 04:14:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.430 04:14:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:06.430 04:14:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.430 04:14:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.430 04:14:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:06.430 04:14:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.430 04:14:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:06.430 04:14:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.430 04:14:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:06.430 "name": "Existed_Raid", 00:16:06.430 "uuid": "c63195c4-ef51-45fd-b253-2ff873e5f5fc", 00:16:06.430 "strip_size_kb": 0, 00:16:06.430 "state": "configuring", 00:16:06.430 "raid_level": "raid1", 00:16:06.430 "superblock": true, 00:16:06.430 "num_base_bdevs": 2, 00:16:06.430 
"num_base_bdevs_discovered": 1, 00:16:06.430 "num_base_bdevs_operational": 2, 00:16:06.430 "base_bdevs_list": [ 00:16:06.430 { 00:16:06.430 "name": "BaseBdev1", 00:16:06.430 "uuid": "2d5672ec-e838-4db2-95e4-7303cfa48f1b", 00:16:06.430 "is_configured": true, 00:16:06.430 "data_offset": 256, 00:16:06.430 "data_size": 7936 00:16:06.430 }, 00:16:06.430 { 00:16:06.430 "name": "BaseBdev2", 00:16:06.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.430 "is_configured": false, 00:16:06.430 "data_offset": 0, 00:16:06.430 "data_size": 0 00:16:06.430 } 00:16:06.430 ] 00:16:06.430 }' 00:16:06.430 04:14:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.430 04:14:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:07.001 04:14:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:07.001 04:14:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.001 04:14:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:07.001 [2024-11-21 04:14:06.739108] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:07.001 [2024-11-21 04:14:06.739154] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:16:07.001 04:14:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.001 04:14:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:07.001 04:14:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.001 04:14:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:07.001 [2024-11-21 04:14:06.751118] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:07.001 [2024-11-21 04:14:06.753341] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:07.001 [2024-11-21 04:14:06.753424] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:07.001 04:14:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.001 04:14:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:07.001 04:14:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:07.001 04:14:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:07.001 04:14:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:07.001 04:14:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:07.001 04:14:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:07.001 04:14:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:07.001 04:14:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:07.001 04:14:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:07.001 04:14:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:07.001 04:14:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:07.001 04:14:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:07.001 04:14:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:16:07.001 04:14:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:07.001 04:14:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.001 04:14:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:07.001 04:14:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.001 04:14:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:07.001 "name": "Existed_Raid", 00:16:07.001 "uuid": "ff3b5211-eefe-47b8-9507-b99264679e76", 00:16:07.001 "strip_size_kb": 0, 00:16:07.001 "state": "configuring", 00:16:07.001 "raid_level": "raid1", 00:16:07.001 "superblock": true, 00:16:07.001 "num_base_bdevs": 2, 00:16:07.001 "num_base_bdevs_discovered": 1, 00:16:07.001 "num_base_bdevs_operational": 2, 00:16:07.001 "base_bdevs_list": [ 00:16:07.001 { 00:16:07.001 "name": "BaseBdev1", 00:16:07.001 "uuid": "2d5672ec-e838-4db2-95e4-7303cfa48f1b", 00:16:07.001 "is_configured": true, 00:16:07.001 "data_offset": 256, 00:16:07.001 "data_size": 7936 00:16:07.001 }, 00:16:07.001 { 00:16:07.001 "name": "BaseBdev2", 00:16:07.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.001 "is_configured": false, 00:16:07.001 "data_offset": 0, 00:16:07.001 "data_size": 0 00:16:07.001 } 00:16:07.001 ] 00:16:07.001 }' 00:16:07.001 04:14:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:07.001 04:14:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:07.261 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:16:07.261 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.519 04:14:07 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:07.519 [2024-11-21 04:14:07.250871] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:07.519 [2024-11-21 04:14:07.251182] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:16:07.519 [2024-11-21 04:14:07.251248] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:07.519 [2024-11-21 04:14:07.251594] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:16:07.519 BaseBdev2 00:16:07.519 [2024-11-21 04:14:07.251796] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:16:07.519 [2024-11-21 04:14:07.251822] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:16:07.519 [2024-11-21 04:14:07.251969] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:07.519 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.519 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:07.519 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:07.519 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:07.519 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:16:07.519 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:07.519 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:07.519 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:07.519 04:14:07 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.519 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:07.519 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.519 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:07.519 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.519 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:07.519 [ 00:16:07.519 { 00:16:07.519 "name": "BaseBdev2", 00:16:07.519 "aliases": [ 00:16:07.519 "c5e3c3a4-ecfe-4b22-8c29-d3233305cfc5" 00:16:07.519 ], 00:16:07.519 "product_name": "Malloc disk", 00:16:07.519 "block_size": 4096, 00:16:07.519 "num_blocks": 8192, 00:16:07.519 "uuid": "c5e3c3a4-ecfe-4b22-8c29-d3233305cfc5", 00:16:07.519 "assigned_rate_limits": { 00:16:07.519 "rw_ios_per_sec": 0, 00:16:07.519 "rw_mbytes_per_sec": 0, 00:16:07.519 "r_mbytes_per_sec": 0, 00:16:07.519 "w_mbytes_per_sec": 0 00:16:07.519 }, 00:16:07.519 "claimed": true, 00:16:07.519 "claim_type": "exclusive_write", 00:16:07.519 "zoned": false, 00:16:07.519 "supported_io_types": { 00:16:07.519 "read": true, 00:16:07.519 "write": true, 00:16:07.519 "unmap": true, 00:16:07.519 "flush": true, 00:16:07.519 "reset": true, 00:16:07.519 "nvme_admin": false, 00:16:07.519 "nvme_io": false, 00:16:07.519 "nvme_io_md": false, 00:16:07.519 "write_zeroes": true, 00:16:07.519 "zcopy": true, 00:16:07.519 "get_zone_info": false, 00:16:07.519 "zone_management": false, 00:16:07.519 "zone_append": false, 00:16:07.519 "compare": false, 00:16:07.519 "compare_and_write": false, 00:16:07.519 "abort": true, 00:16:07.519 "seek_hole": false, 00:16:07.519 "seek_data": false, 00:16:07.519 "copy": true, 00:16:07.519 "nvme_iov_md": false 
00:16:07.519 }, 00:16:07.519 "memory_domains": [ 00:16:07.519 { 00:16:07.519 "dma_device_id": "system", 00:16:07.519 "dma_device_type": 1 00:16:07.519 }, 00:16:07.519 { 00:16:07.519 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:07.519 "dma_device_type": 2 00:16:07.519 } 00:16:07.519 ], 00:16:07.519 "driver_specific": {} 00:16:07.519 } 00:16:07.519 ] 00:16:07.519 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.519 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:16:07.519 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:07.519 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:07.519 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:07.519 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:07.519 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:07.519 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:07.519 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:07.519 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:07.519 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:07.519 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:07.519 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:07.519 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:16:07.519 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.519 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:07.519 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.519 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:07.519 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.519 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:07.519 "name": "Existed_Raid", 00:16:07.519 "uuid": "ff3b5211-eefe-47b8-9507-b99264679e76", 00:16:07.519 "strip_size_kb": 0, 00:16:07.519 "state": "online", 00:16:07.519 "raid_level": "raid1", 00:16:07.519 "superblock": true, 00:16:07.519 "num_base_bdevs": 2, 00:16:07.519 "num_base_bdevs_discovered": 2, 00:16:07.519 "num_base_bdevs_operational": 2, 00:16:07.519 "base_bdevs_list": [ 00:16:07.519 { 00:16:07.519 "name": "BaseBdev1", 00:16:07.519 "uuid": "2d5672ec-e838-4db2-95e4-7303cfa48f1b", 00:16:07.519 "is_configured": true, 00:16:07.519 "data_offset": 256, 00:16:07.519 "data_size": 7936 00:16:07.519 }, 00:16:07.519 { 00:16:07.519 "name": "BaseBdev2", 00:16:07.519 "uuid": "c5e3c3a4-ecfe-4b22-8c29-d3233305cfc5", 00:16:07.519 "is_configured": true, 00:16:07.519 "data_offset": 256, 00:16:07.519 "data_size": 7936 00:16:07.519 } 00:16:07.519 ] 00:16:07.519 }' 00:16:07.519 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:07.519 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:07.779 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:07.779 04:14:07 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:07.779 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:07.779 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:07.779 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:16:07.779 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:07.779 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:07.779 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:07.779 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.779 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:07.779 [2024-11-21 04:14:07.722376] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:07.779 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.779 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:07.779 "name": "Existed_Raid", 00:16:07.779 "aliases": [ 00:16:07.779 "ff3b5211-eefe-47b8-9507-b99264679e76" 00:16:07.779 ], 00:16:07.779 "product_name": "Raid Volume", 00:16:07.779 "block_size": 4096, 00:16:07.779 "num_blocks": 7936, 00:16:07.779 "uuid": "ff3b5211-eefe-47b8-9507-b99264679e76", 00:16:07.779 "assigned_rate_limits": { 00:16:07.779 "rw_ios_per_sec": 0, 00:16:07.779 "rw_mbytes_per_sec": 0, 00:16:07.779 "r_mbytes_per_sec": 0, 00:16:07.779 "w_mbytes_per_sec": 0 00:16:07.779 }, 00:16:07.779 "claimed": false, 00:16:07.779 "zoned": false, 00:16:07.779 "supported_io_types": { 00:16:07.779 "read": true, 
00:16:07.779 "write": true, 00:16:07.779 "unmap": false, 00:16:07.779 "flush": false, 00:16:07.779 "reset": true, 00:16:07.779 "nvme_admin": false, 00:16:07.779 "nvme_io": false, 00:16:07.779 "nvme_io_md": false, 00:16:07.779 "write_zeroes": true, 00:16:07.779 "zcopy": false, 00:16:07.779 "get_zone_info": false, 00:16:07.779 "zone_management": false, 00:16:07.779 "zone_append": false, 00:16:07.779 "compare": false, 00:16:07.779 "compare_and_write": false, 00:16:07.779 "abort": false, 00:16:07.779 "seek_hole": false, 00:16:07.779 "seek_data": false, 00:16:07.779 "copy": false, 00:16:07.779 "nvme_iov_md": false 00:16:07.779 }, 00:16:07.779 "memory_domains": [ 00:16:07.779 { 00:16:07.779 "dma_device_id": "system", 00:16:07.779 "dma_device_type": 1 00:16:07.779 }, 00:16:07.779 { 00:16:07.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:07.779 "dma_device_type": 2 00:16:07.779 }, 00:16:07.779 { 00:16:07.779 "dma_device_id": "system", 00:16:07.779 "dma_device_type": 1 00:16:07.779 }, 00:16:07.779 { 00:16:07.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:07.779 "dma_device_type": 2 00:16:07.779 } 00:16:07.779 ], 00:16:07.779 "driver_specific": { 00:16:07.779 "raid": { 00:16:07.779 "uuid": "ff3b5211-eefe-47b8-9507-b99264679e76", 00:16:07.779 "strip_size_kb": 0, 00:16:07.779 "state": "online", 00:16:07.779 "raid_level": "raid1", 00:16:07.779 "superblock": true, 00:16:07.779 "num_base_bdevs": 2, 00:16:07.779 "num_base_bdevs_discovered": 2, 00:16:07.779 "num_base_bdevs_operational": 2, 00:16:07.779 "base_bdevs_list": [ 00:16:07.779 { 00:16:07.779 "name": "BaseBdev1", 00:16:07.779 "uuid": "2d5672ec-e838-4db2-95e4-7303cfa48f1b", 00:16:07.779 "is_configured": true, 00:16:07.779 "data_offset": 256, 00:16:07.779 "data_size": 7936 00:16:07.779 }, 00:16:07.779 { 00:16:07.779 "name": "BaseBdev2", 00:16:07.779 "uuid": "c5e3c3a4-ecfe-4b22-8c29-d3233305cfc5", 00:16:07.779 "is_configured": true, 00:16:07.779 "data_offset": 256, 00:16:07.779 "data_size": 7936 00:16:07.779 } 
00:16:07.779 ]
00:16:07.779 }
00:16:07.779 }
00:16:07.779 }'
00:16:07.779 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:16:08.040 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:16:08.040 BaseBdev2'
00:16:08.040 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:08.040 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 '
00:16:08.040 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:16:08.040 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:08.040 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:16:08.040 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:08.040 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:16:08.040 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:08.040 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 '
00:16:08.040 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]]
00:16:08.040 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:16:08.040 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:16:08.040 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:08.040 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:16:08.040 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:08.040 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:08.040 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 '
00:16:08.040 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]]
00:16:08.040 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:16:08.040 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:08.040 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:16:08.040 [2024-11-21 04:14:07.953757] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:16:08.040 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:08.040 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state
00:16:08.040 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1
00:16:08.040 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in
00:16:08.040 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0
00:16:08.040 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online
00:16:08.040 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1
00:16:08.040 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:16:08.040 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:16:08.040 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:16:08.040 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:16:08.040 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:16:08.040 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:08.040 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:08.040 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:08.040 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:08.040 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:08.040 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:08.040 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:08.040 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:16:08.040 04:14:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:08.301 04:14:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:08.301 "name": "Existed_Raid",
00:16:08.301 "uuid": "ff3b5211-eefe-47b8-9507-b99264679e76",
00:16:08.301 "strip_size_kb": 0,
00:16:08.301 "state": "online",
00:16:08.301 "raid_level": "raid1",
00:16:08.301 "superblock": true,
00:16:08.301 "num_base_bdevs": 2,
00:16:08.301 "num_base_bdevs_discovered": 1,
00:16:08.301 "num_base_bdevs_operational": 1,
00:16:08.301 "base_bdevs_list": [
00:16:08.301 {
00:16:08.301 "name": null,
00:16:08.301 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:08.301 "is_configured": false,
00:16:08.301 "data_offset": 0,
00:16:08.301 "data_size": 7936
00:16:08.301 },
00:16:08.301 {
00:16:08.301 "name": "BaseBdev2",
00:16:08.301 "uuid": "c5e3c3a4-ecfe-4b22-8c29-d3233305cfc5",
00:16:08.301 "is_configured": true,
00:16:08.301 "data_offset": 256,
00:16:08.301 "data_size": 7936
00:16:08.301 }
00:16:08.301 ]
00:16:08.301 }'
00:16:08.301 04:14:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:08.301 04:14:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:16:08.563 04:14:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:16:08.563 04:14:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:16:08.563 04:14:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:08.563 04:14:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:16:08.563 04:14:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:08.563 04:14:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:16:08.563 04:14:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:08.563 04:14:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:16:08.563 04:14:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:16:08.563 04:14:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:16:08.563 04:14:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:08.563 04:14:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:16:08.563 [2024-11-21 04:14:08.501600] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:16:08.563 [2024-11-21 04:14:08.501711] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:16:08.563 [2024-11-21 04:14:08.523071] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:16:08.563 [2024-11-21 04:14:08.523127] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:16:08.563 [2024-11-21 04:14:08.523148] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline
00:16:08.563 04:14:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:08.563 04:14:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:16:08.563 04:14:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:16:08.563 04:14:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:08.563 04:14:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:16:08.563 04:14:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:08.563 04:14:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:16:08.823 04:14:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:08.823 04:14:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:16:08.823 04:14:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:16:08.823 04:14:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']'
00:16:08.823 04:14:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 96364
00:16:08.823 04:14:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 96364 ']'
00:16:08.823 04:14:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 96364
00:16:08.823 04:14:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname
00:16:08.823 04:14:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:16:08.823 04:14:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96364
00:16:08.823 killing process with pid 96364
00:16:08.823 04:14:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:16:08.823 04:14:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:16:08.823 04:14:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96364'
00:16:08.823 04:14:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 96364
00:16:08.823 [2024-11-21 04:14:08.619948] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:16:08.823 04:14:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 96364
00:16:08.823 [2024-11-21 04:14:08.621559] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:16:09.084 04:14:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0
00:16:09.084
00:16:09.084 real 0m4.153s
00:16:09.084 user 0m6.384s
00:16:09.084 sys 0m0.956s
00:16:09.084 ************************************
00:16:09.084 END TEST raid_state_function_test_sb_4k
00:16:09.084 ************************************
00:16:09.084 04:14:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:09.084 04:14:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:16:09.084 04:14:09 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2
00:16:09.084 04:14:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:16:09.084 04:14:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:09.084 04:14:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:16:09.084 ************************************
00:16:09.084 START TEST raid_superblock_test_4k
00:16:09.084 ************************************
00:16:09.084 04:14:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2
00:16:09.084 04:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1
00:16:09.084 04:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2
00:16:09.084 04:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=()
00:16:09.084 04:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc
00:16:09.084 04:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=()
00:16:09.084 04:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt
00:16:09.084 04:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=()
00:16:09.084 04:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid
00:16:09.084 04:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1
00:16:09.084 04:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size
00:16:09.084 04:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg
00:16:09.084 04:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid
00:16:09.084 04:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev
00:16:09.084 04:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']'
00:16:09.084 04:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0
00:16:09.084 04:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=96605
00:16:09.084 04:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid
00:16:09.084 04:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 96605
00:16:09.084 04:14:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 96605 ']'
00:16:09.084 04:14:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:09.084 04:14:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # local max_retries=100
00:16:09.084 04:14:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:09.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:09.084 04:14:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable
00:16:09.084 04:14:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:16:09.344 [2024-11-21 04:14:09.122629] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization...
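Editor's note: throughout this log the harness compares bdevs by the jq tuple `[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")`, which is why the captured values read `cmp_raid_bdev='4096 '` and the checks take the form `[[ 4096 == \4\0\9\6\ \ \ ]]` (the escaped pattern is "4096" plus three spaces). A minimal stand-alone sketch of that behaviour, using a trimmed stand-in JSON document rather than real `bdev_get_bdevs` output from this run, and relying on jq's documented `join` rule that `null` elements render as empty strings:

```shell
# Trimmed stand-in for one entry of bdev_get_bdevs output (illustrative, not from this run).
# The md_size/md_interleave/dif_type keys are absent, so jq yields null for them.
json='[{"name": "BaseBdev1", "block_size": 4096, "num_blocks": 7936}]'

# Same filter as bdev_raid.sh@192: null fields join as empty strings,
# producing "4096" followed by three spaces.
cmp_base_bdev=$(printf '%s' "$json" | jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")')

# The harness then matches the expected tuple; the trailing spaces are significant,
# which is why the xtrace above shows each one individually escaped.
[[ $cmp_base_bdev == "4096   " ]] && echo match
```

Under bash, the `$( … )` substitution strips only trailing newlines, so the three significant trailing spaces survive into the comparison.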
00:16:09.344 [2024-11-21 04:14:09.122847] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96605 ]
00:16:09.344 [2024-11-21 04:14:09.283633] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:09.604 [2024-11-21 04:14:09.324610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:16:09.604 [2024-11-21 04:14:09.402684] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:16:09.604 [2024-11-21 04:14:09.402833] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:16:10.175 04:14:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:16:10.175 04:14:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0
00:16:10.175 04:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:16:10.175 04:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:16:10.175 04:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:16:10.175 04:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:16:10.175 04:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:16:10.175 04:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:16:10.175 04:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:16:10.175 04:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:16:10.175 04:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc1
00:16:10.175 04:14:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:10.175 04:14:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:16:10.175 malloc1
00:16:10.175 04:14:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:10.175 04:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:16:10.175 04:14:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:10.175 04:14:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:16:10.175 [2024-11-21 04:14:09.970627] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:16:10.175 [2024-11-21 04:14:09.970755] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:10.175 [2024-11-21 04:14:09.970793] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680
00:16:10.175 [2024-11-21 04:14:09.970853] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:10.175 [2024-11-21 04:14:09.973332] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:10.175 [2024-11-21 04:14:09.973406] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:16:10.175 pt1
00:16:10.175 04:14:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:10.175 04:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:16:10.175 04:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:16:10.175 04:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:16:10.175 04:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:16:10.175 04:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:16:10.175 04:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:16:10.175 04:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:16:10.175 04:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:16:10.175 04:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2
00:16:10.175 04:14:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:10.175 04:14:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:16:10.175 malloc2
00:16:10.175 04:14:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:10.175 04:14:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:16:10.175 04:14:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:10.175 04:14:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:16:10.175 [2024-11-21 04:14:10.005423] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:16:10.175 [2024-11-21 04:14:10.005516] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:10.175 [2024-11-21 04:14:10.005564] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:16:10.175 [2024-11-21 04:14:10.005593] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:10.175 [2024-11-21 04:14:10.008021] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:10.175 [2024-11-21 04:14:10.008089] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:16:10.175 pt2
00:16:10.175 04:14:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:10.175 04:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:16:10.175 04:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:16:10.175 04:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s
00:16:10.175 04:14:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:10.175 04:14:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:16:10.175 [2024-11-21 04:14:10.017453] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:16:10.175 [2024-11-21 04:14:10.019607] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:16:10.175 [2024-11-21 04:14:10.019805] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200
00:16:10.175 [2024-11-21 04:14:10.019852] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096
00:16:10.175 [2024-11-21 04:14:10.020212] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390
00:16:10.175 [2024-11-21 04:14:10.020460] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200
00:16:10.175 [2024-11-21 04:14:10.020504] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200
00:16:10.175 [2024-11-21 04:14:10.020714] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:16:10.175 04:14:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:10.175 04:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:16:10.175 04:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:10.175 04:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:16:10.175 04:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:16:10.176 04:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:16:10.176 04:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:16:10.176 04:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:10.176 04:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:10.176 04:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:10.176 04:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:10.176 04:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:10.176 04:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:10.176 04:14:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:10.176 04:14:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:16:10.176 04:14:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:10.176 04:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:10.176 "name": "raid_bdev1",
00:16:10.176 "uuid": "7d24cfd0-cba9-4556-a1cb-91de80e345d0",
00:16:10.176 "strip_size_kb": 0,
00:16:10.176 "state": "online",
00:16:10.176 "raid_level": "raid1",
00:16:10.176 "superblock": true,
00:16:10.176 "num_base_bdevs": 2,
00:16:10.176 "num_base_bdevs_discovered": 2,
00:16:10.176 "num_base_bdevs_operational": 2,
00:16:10.176 "base_bdevs_list": [
00:16:10.176 {
00:16:10.176 "name": "pt1",
00:16:10.176 "uuid": "00000000-0000-0000-0000-000000000001",
00:16:10.176 "is_configured": true,
00:16:10.176 "data_offset": 256,
00:16:10.176 "data_size": 7936
00:16:10.176 },
00:16:10.176 {
00:16:10.176 "name": "pt2",
00:16:10.176 "uuid": "00000000-0000-0000-0000-000000000002",
00:16:10.176 "is_configured": true,
00:16:10.176 "data_offset": 256,
00:16:10.176 "data_size": 7936
00:16:10.176 }
00:16:10.176 ]
00:16:10.176 }'
00:16:10.176 04:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:10.176 04:14:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:16:10.745 04:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:16:10.745 04:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:16:10.745 04:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:16:10.745 04:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:16:10.745 04:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name
00:16:10.745 04:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:16:10.745 04:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:16:10.745 04:14:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:10.745 04:14:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:16:10.745 04:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:16:10.745 [2024-11-21 04:14:10.524822] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:16:10.745 04:14:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:10.745 04:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:16:10.745 "name": "raid_bdev1",
00:16:10.745 "aliases": [
00:16:10.745 "7d24cfd0-cba9-4556-a1cb-91de80e345d0"
00:16:10.745 ],
00:16:10.745 "product_name": "Raid Volume",
00:16:10.745 "block_size": 4096,
00:16:10.745 "num_blocks": 7936,
00:16:10.745 "uuid": "7d24cfd0-cba9-4556-a1cb-91de80e345d0",
00:16:10.745 "assigned_rate_limits": {
00:16:10.745 "rw_ios_per_sec": 0,
00:16:10.745 "rw_mbytes_per_sec": 0,
00:16:10.745 "r_mbytes_per_sec": 0,
00:16:10.745 "w_mbytes_per_sec": 0
00:16:10.745 },
00:16:10.745 "claimed": false,
00:16:10.745 "zoned": false,
00:16:10.745 "supported_io_types": {
00:16:10.745 "read": true,
00:16:10.745 "write": true,
00:16:10.745 "unmap": false,
00:16:10.745 "flush": false,
00:16:10.745 "reset": true,
00:16:10.745 "nvme_admin": false,
00:16:10.745 "nvme_io": false,
00:16:10.745 "nvme_io_md": false,
00:16:10.745 "write_zeroes": true,
00:16:10.745 "zcopy": false,
00:16:10.745 "get_zone_info": false,
00:16:10.745 "zone_management": false,
00:16:10.745 "zone_append": false,
00:16:10.745 "compare": false,
00:16:10.745 "compare_and_write": false,
00:16:10.745 "abort": false,
00:16:10.745 "seek_hole": false,
00:16:10.745 "seek_data": false,
00:16:10.745 "copy": false,
00:16:10.745 "nvme_iov_md": false
00:16:10.745 },
00:16:10.745 "memory_domains": [
00:16:10.746 {
00:16:10.746 "dma_device_id": "system",
00:16:10.746 "dma_device_type": 1
00:16:10.746 },
00:16:10.746 {
00:16:10.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:10.746 "dma_device_type": 2
00:16:10.746 },
00:16:10.746 {
00:16:10.746 "dma_device_id": "system",
00:16:10.746 "dma_device_type": 1
00:16:10.746 },
00:16:10.746 {
00:16:10.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:10.746 "dma_device_type": 2
00:16:10.746 }
00:16:10.746 ],
00:16:10.746 "driver_specific": {
00:16:10.746 "raid": {
00:16:10.746 "uuid": "7d24cfd0-cba9-4556-a1cb-91de80e345d0",
00:16:10.746 "strip_size_kb": 0,
00:16:10.746 "state": "online",
00:16:10.746 "raid_level": "raid1",
00:16:10.746 "superblock": true,
00:16:10.746 "num_base_bdevs": 2,
00:16:10.746 "num_base_bdevs_discovered": 2,
00:16:10.746 "num_base_bdevs_operational": 2,
00:16:10.746 "base_bdevs_list": [
00:16:10.746 {
00:16:10.746 "name": "pt1",
00:16:10.746 "uuid": "00000000-0000-0000-0000-000000000001",
00:16:10.746 "is_configured": true,
00:16:10.746 "data_offset": 256,
00:16:10.746 "data_size": 7936
00:16:10.746 },
00:16:10.746 {
00:16:10.746 "name": "pt2",
00:16:10.746 "uuid": "00000000-0000-0000-0000-000000000002",
00:16:10.746 "is_configured": true,
00:16:10.746 "data_offset": 256,
00:16:10.746 "data_size": 7936
00:16:10.746 }
00:16:10.746 ]
00:16:10.746 }
00:16:10.746 }
00:16:10.746 }'
00:16:10.746 04:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:16:10.746 04:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:16:10.746 pt2'
00:16:10.746 04:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:10.746 04:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 '
00:16:10.746 04:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:16:10.746 04:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:10.746 04:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:16:10.746 04:14:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:10.746 04:14:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:16:11.009 04:14:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:11.009 04:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 '
00:16:11.009 04:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]]
00:16:11.009 04:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:16:11.009 04:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:16:11.009 04:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:11.009 04:14:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:11.009 04:14:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:16:11.009 04:14:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:11.009 04:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 '
00:16:11.009 04:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]]
00:16:11.009 04:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:16:11.009 04:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:16:11.009 04:14:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:11.009 04:14:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:16:11.009 [2024-11-21 04:14:10.780355] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:16:11.009 04:14:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:11.009 04:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=7d24cfd0-cba9-4556-a1cb-91de80e345d0
00:16:11.009 04:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 7d24cfd0-cba9-4556-a1cb-91de80e345d0 ']'
00:16:11.009 04:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:16:11.009 04:14:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:11.009 04:14:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:16:11.009 [2024-11-21 04:14:10.824043] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:16:11.009 [2024-11-21 04:14:10.824074] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:16:11.009 [2024-11-21 04:14:10.824143] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:16:11.009 [2024-11-21 04:14:10.824211] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:16:11.009 [2024-11-21 04:14:10.824220] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline
00:16:11.009 04:14:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:11.009 04:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:11.009 04:14:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:11.009 04:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:16:11.009 04:14:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:16:11.009 04:14:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:11.009 04:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:16:11.009 04:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:16:11.009 04:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:16:11.009 04:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:16:11.009 04:14:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:11.009 04:14:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:16:11.009 04:14:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:11.009 04:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:16:11.009 04:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:16:11.009 04:14:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:11.009 04:14:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:16:11.009 04:14:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:11.009 04:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:16:11.009 04:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:16:11.009 04:14:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:11.009 04:14:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:16:11.009 04:14:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:11.009 04:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:16:11.009 04:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:11.009 04:14:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0 00:16:11.009 04:14:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:11.009 04:14:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:11.009 04:14:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:11.009 04:14:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:11.009 04:14:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:11.009 04:14:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:11.009 04:14:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.009 04:14:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:11.010 [2024-11-21 04:14:10.963808] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:11.010 [2024-11-21 04:14:10.965962] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:11.010 [2024-11-21 04:14:10.966076] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:11.010 [2024-11-21 04:14:10.966122] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:11.010 [2024-11-21 04:14:10.966138] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:11.010 [2024-11-21 04:14:10.966146] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:16:11.010 request: 00:16:11.010 { 00:16:11.010 "name": "raid_bdev1", 00:16:11.010 "raid_level": "raid1", 00:16:11.010 "base_bdevs": [ 00:16:11.010 "malloc1", 00:16:11.010 "malloc2" 00:16:11.010 ], 00:16:11.010 "superblock": false, 00:16:11.010 "method": "bdev_raid_create", 00:16:11.010 "req_id": 1 00:16:11.010 } 00:16:11.010 Got JSON-RPC error response 00:16:11.010 response: 00:16:11.010 { 00:16:11.010 "code": -17, 00:16:11.010 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:11.010 } 00:16:11.010 04:14:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:11.010 04:14:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1 00:16:11.010 04:14:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:11.010 04:14:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:11.010 04:14:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:11.010 04:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.303 04:14:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:11.303 04:14:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.303 04:14:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:11.303 04:14:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.303 04:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:11.303 04:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:11.303 04:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:16:11.303 04:14:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.303 04:14:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:11.303 [2024-11-21 04:14:11.027688] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:11.303 [2024-11-21 04:14:11.027775] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:11.303 [2024-11-21 04:14:11.027825] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:11.303 [2024-11-21 04:14:11.027852] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:11.303 [2024-11-21 04:14:11.030318] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:11.303 [2024-11-21 04:14:11.030386] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:11.303 [2024-11-21 04:14:11.030486] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:11.303 [2024-11-21 04:14:11.030561] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:11.303 pt1 00:16:11.303 04:14:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.303 04:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:16:11.303 04:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:11.303 04:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:11.303 04:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:11.303 04:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:11.303 04:14:11 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:11.303 04:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:11.303 04:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:11.303 04:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:11.303 04:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:11.303 04:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.303 04:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.303 04:14:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.303 04:14:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:11.303 04:14:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.303 04:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:11.303 "name": "raid_bdev1", 00:16:11.303 "uuid": "7d24cfd0-cba9-4556-a1cb-91de80e345d0", 00:16:11.303 "strip_size_kb": 0, 00:16:11.303 "state": "configuring", 00:16:11.303 "raid_level": "raid1", 00:16:11.303 "superblock": true, 00:16:11.303 "num_base_bdevs": 2, 00:16:11.303 "num_base_bdevs_discovered": 1, 00:16:11.303 "num_base_bdevs_operational": 2, 00:16:11.303 "base_bdevs_list": [ 00:16:11.303 { 00:16:11.303 "name": "pt1", 00:16:11.303 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:11.303 "is_configured": true, 00:16:11.303 "data_offset": 256, 00:16:11.303 "data_size": 7936 00:16:11.303 }, 00:16:11.303 { 00:16:11.303 "name": null, 00:16:11.303 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:11.303 "is_configured": false, 00:16:11.303 "data_offset": 256, 00:16:11.303 "data_size": 7936 00:16:11.303 } 
00:16:11.303 ] 00:16:11.303 }' 00:16:11.303 04:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:11.303 04:14:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:11.581 04:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:16:11.581 04:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:11.581 04:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:11.581 04:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:11.581 04:14:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.581 04:14:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:11.582 [2024-11-21 04:14:11.490880] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:11.582 [2024-11-21 04:14:11.490979] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:11.582 [2024-11-21 04:14:11.491014] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:16:11.582 [2024-11-21 04:14:11.491040] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:11.582 [2024-11-21 04:14:11.491471] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:11.582 [2024-11-21 04:14:11.491531] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:11.582 [2024-11-21 04:14:11.491634] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:11.582 [2024-11-21 04:14:11.491680] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:11.582 [2024-11-21 04:14:11.491805] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000001900 00:16:11.582 [2024-11-21 04:14:11.491842] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:11.582 [2024-11-21 04:14:11.492115] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:16:11.582 [2024-11-21 04:14:11.492267] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:16:11.582 [2024-11-21 04:14:11.492284] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:16:11.582 [2024-11-21 04:14:11.492379] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:11.582 pt2 00:16:11.582 04:14:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.582 04:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:11.582 04:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:11.582 04:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:11.582 04:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:11.582 04:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:11.582 04:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:11.582 04:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:11.582 04:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:11.582 04:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:11.582 04:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:11.582 04:14:11 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:11.582 04:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:11.582 04:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.582 04:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.582 04:14:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.582 04:14:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:11.582 04:14:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.582 04:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:11.582 "name": "raid_bdev1", 00:16:11.582 "uuid": "7d24cfd0-cba9-4556-a1cb-91de80e345d0", 00:16:11.582 "strip_size_kb": 0, 00:16:11.582 "state": "online", 00:16:11.582 "raid_level": "raid1", 00:16:11.582 "superblock": true, 00:16:11.582 "num_base_bdevs": 2, 00:16:11.582 "num_base_bdevs_discovered": 2, 00:16:11.582 "num_base_bdevs_operational": 2, 00:16:11.582 "base_bdevs_list": [ 00:16:11.582 { 00:16:11.582 "name": "pt1", 00:16:11.582 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:11.582 "is_configured": true, 00:16:11.582 "data_offset": 256, 00:16:11.582 "data_size": 7936 00:16:11.582 }, 00:16:11.582 { 00:16:11.582 "name": "pt2", 00:16:11.582 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:11.582 "is_configured": true, 00:16:11.582 "data_offset": 256, 00:16:11.582 "data_size": 7936 00:16:11.582 } 00:16:11.582 ] 00:16:11.582 }' 00:16:11.582 04:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:11.582 04:14:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:12.152 04:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:16:12.152 04:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:12.152 04:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:12.152 04:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:12.152 04:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:16:12.152 04:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:12.152 04:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:12.152 04:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:12.152 04:14:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.152 04:14:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:12.152 [2024-11-21 04:14:11.958403] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:12.152 04:14:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.152 04:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:12.152 "name": "raid_bdev1", 00:16:12.152 "aliases": [ 00:16:12.152 "7d24cfd0-cba9-4556-a1cb-91de80e345d0" 00:16:12.152 ], 00:16:12.152 "product_name": "Raid Volume", 00:16:12.152 "block_size": 4096, 00:16:12.152 "num_blocks": 7936, 00:16:12.152 "uuid": "7d24cfd0-cba9-4556-a1cb-91de80e345d0", 00:16:12.152 "assigned_rate_limits": { 00:16:12.152 "rw_ios_per_sec": 0, 00:16:12.152 "rw_mbytes_per_sec": 0, 00:16:12.152 "r_mbytes_per_sec": 0, 00:16:12.152 "w_mbytes_per_sec": 0 00:16:12.152 }, 00:16:12.152 "claimed": false, 00:16:12.152 "zoned": false, 00:16:12.152 "supported_io_types": { 00:16:12.152 "read": true, 00:16:12.152 "write": true, 00:16:12.152 "unmap": false, 
00:16:12.152 "flush": false, 00:16:12.152 "reset": true, 00:16:12.152 "nvme_admin": false, 00:16:12.152 "nvme_io": false, 00:16:12.152 "nvme_io_md": false, 00:16:12.152 "write_zeroes": true, 00:16:12.152 "zcopy": false, 00:16:12.152 "get_zone_info": false, 00:16:12.152 "zone_management": false, 00:16:12.152 "zone_append": false, 00:16:12.152 "compare": false, 00:16:12.152 "compare_and_write": false, 00:16:12.152 "abort": false, 00:16:12.152 "seek_hole": false, 00:16:12.152 "seek_data": false, 00:16:12.152 "copy": false, 00:16:12.152 "nvme_iov_md": false 00:16:12.152 }, 00:16:12.152 "memory_domains": [ 00:16:12.152 { 00:16:12.152 "dma_device_id": "system", 00:16:12.152 "dma_device_type": 1 00:16:12.152 }, 00:16:12.152 { 00:16:12.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:12.152 "dma_device_type": 2 00:16:12.152 }, 00:16:12.152 { 00:16:12.152 "dma_device_id": "system", 00:16:12.152 "dma_device_type": 1 00:16:12.152 }, 00:16:12.152 { 00:16:12.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:12.152 "dma_device_type": 2 00:16:12.152 } 00:16:12.152 ], 00:16:12.152 "driver_specific": { 00:16:12.152 "raid": { 00:16:12.152 "uuid": "7d24cfd0-cba9-4556-a1cb-91de80e345d0", 00:16:12.152 "strip_size_kb": 0, 00:16:12.152 "state": "online", 00:16:12.152 "raid_level": "raid1", 00:16:12.152 "superblock": true, 00:16:12.152 "num_base_bdevs": 2, 00:16:12.152 "num_base_bdevs_discovered": 2, 00:16:12.152 "num_base_bdevs_operational": 2, 00:16:12.152 "base_bdevs_list": [ 00:16:12.152 { 00:16:12.152 "name": "pt1", 00:16:12.152 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:12.152 "is_configured": true, 00:16:12.152 "data_offset": 256, 00:16:12.152 "data_size": 7936 00:16:12.152 }, 00:16:12.152 { 00:16:12.152 "name": "pt2", 00:16:12.152 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:12.152 "is_configured": true, 00:16:12.152 "data_offset": 256, 00:16:12.152 "data_size": 7936 00:16:12.152 } 00:16:12.152 ] 00:16:12.152 } 00:16:12.152 } 00:16:12.152 }' 00:16:12.152 
04:14:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:12.152 04:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:12.152 pt2' 00:16:12.152 04:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:12.152 04:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:16:12.152 04:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:12.152 04:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:12.152 04:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:12.152 04:14:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.152 04:14:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:12.152 04:14:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.152 04:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:12.152 04:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:12.152 04:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:12.413 04:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:12.413 04:14:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.413 04:14:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:12.413 04:14:12 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:12.413 04:14:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.413 04:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:12.413 04:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:12.413 04:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:12.413 04:14:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.413 04:14:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:12.413 04:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:12.413 [2024-11-21 04:14:12.181966] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:12.413 04:14:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.413 04:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 7d24cfd0-cba9-4556-a1cb-91de80e345d0 '!=' 7d24cfd0-cba9-4556-a1cb-91de80e345d0 ']' 00:16:12.413 04:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:16:12.413 04:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:12.413 04:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:16:12.413 04:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:12.413 04:14:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.413 04:14:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:12.413 [2024-11-21 04:14:12.229693] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 
00:16:12.413 04:14:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.413 04:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:12.413 04:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:12.413 04:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:12.413 04:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:12.413 04:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:12.413 04:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:12.413 04:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:12.413 04:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:12.413 04:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:12.413 04:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:12.413 04:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.413 04:14:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.413 04:14:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:12.413 04:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.413 04:14:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.413 04:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:12.413 "name": "raid_bdev1", 00:16:12.413 "uuid": 
"7d24cfd0-cba9-4556-a1cb-91de80e345d0", 00:16:12.413 "strip_size_kb": 0, 00:16:12.413 "state": "online", 00:16:12.413 "raid_level": "raid1", 00:16:12.413 "superblock": true, 00:16:12.413 "num_base_bdevs": 2, 00:16:12.413 "num_base_bdevs_discovered": 1, 00:16:12.413 "num_base_bdevs_operational": 1, 00:16:12.413 "base_bdevs_list": [ 00:16:12.413 { 00:16:12.413 "name": null, 00:16:12.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.413 "is_configured": false, 00:16:12.414 "data_offset": 0, 00:16:12.414 "data_size": 7936 00:16:12.414 }, 00:16:12.414 { 00:16:12.414 "name": "pt2", 00:16:12.414 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:12.414 "is_configured": true, 00:16:12.414 "data_offset": 256, 00:16:12.414 "data_size": 7936 00:16:12.414 } 00:16:12.414 ] 00:16:12.414 }' 00:16:12.414 04:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:12.414 04:14:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:12.674 04:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:12.674 04:14:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.674 04:14:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:12.934 [2024-11-21 04:14:12.648965] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:12.934 [2024-11-21 04:14:12.649035] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:12.934 [2024-11-21 04:14:12.649124] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:12.934 [2024-11-21 04:14:12.649214] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:12.934 [2024-11-21 04:14:12.649279] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state 
offline 00:16:12.934 04:14:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.934 04:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:12.934 04:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.934 04:14:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.934 04:14:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:12.934 04:14:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.934 04:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:12.934 04:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:12.934 04:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:12.934 04:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:12.934 04:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:12.934 04:14:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.934 04:14:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:12.935 04:14:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.935 04:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:12.935 04:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:12.935 04:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:12.935 04:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:12.935 04:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 
00:16:12.935 04:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:12.935 04:14:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.935 04:14:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:12.935 [2024-11-21 04:14:12.704861] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:12.935 [2024-11-21 04:14:12.704914] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:12.935 [2024-11-21 04:14:12.704935] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:16:12.935 [2024-11-21 04:14:12.704944] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:12.935 [2024-11-21 04:14:12.707421] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:12.935 [2024-11-21 04:14:12.707454] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:12.935 [2024-11-21 04:14:12.707523] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:12.935 [2024-11-21 04:14:12.707555] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:12.935 [2024-11-21 04:14:12.707629] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:16:12.935 [2024-11-21 04:14:12.707641] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:12.935 [2024-11-21 04:14:12.707910] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:16:12.935 [2024-11-21 04:14:12.708022] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:16:12.935 [2024-11-21 04:14:12.708033] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000001c80 00:16:12.935 [2024-11-21 04:14:12.708129] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:12.935 pt2 00:16:12.935 04:14:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.935 04:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:12.935 04:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:12.935 04:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:12.935 04:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:12.935 04:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:12.935 04:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:12.935 04:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:12.935 04:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:12.935 04:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:12.935 04:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:12.935 04:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.935 04:14:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.935 04:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.935 04:14:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:12.935 04:14:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.935 04:14:12 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:12.935 "name": "raid_bdev1", 00:16:12.935 "uuid": "7d24cfd0-cba9-4556-a1cb-91de80e345d0", 00:16:12.935 "strip_size_kb": 0, 00:16:12.935 "state": "online", 00:16:12.935 "raid_level": "raid1", 00:16:12.935 "superblock": true, 00:16:12.935 "num_base_bdevs": 2, 00:16:12.935 "num_base_bdevs_discovered": 1, 00:16:12.935 "num_base_bdevs_operational": 1, 00:16:12.935 "base_bdevs_list": [ 00:16:12.935 { 00:16:12.935 "name": null, 00:16:12.935 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.935 "is_configured": false, 00:16:12.935 "data_offset": 256, 00:16:12.935 "data_size": 7936 00:16:12.935 }, 00:16:12.935 { 00:16:12.935 "name": "pt2", 00:16:12.935 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:12.935 "is_configured": true, 00:16:12.935 "data_offset": 256, 00:16:12.935 "data_size": 7936 00:16:12.935 } 00:16:12.935 ] 00:16:12.935 }' 00:16:12.935 04:14:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:12.935 04:14:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:13.195 04:14:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:13.195 04:14:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.195 04:14:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:13.195 [2024-11-21 04:14:13.152344] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:13.195 [2024-11-21 04:14:13.152408] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:13.195 [2024-11-21 04:14:13.152489] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:13.195 [2024-11-21 04:14:13.152561] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:16:13.195 [2024-11-21 04:14:13.152631] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:16:13.195 04:14:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.195 04:14:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.195 04:14:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:13.195 04:14:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.195 04:14:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:13.455 04:14:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.455 04:14:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:13.455 04:14:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:16:13.455 04:14:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:16:13.455 04:14:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:13.455 04:14:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.455 04:14:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:13.455 [2024-11-21 04:14:13.216355] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:13.455 [2024-11-21 04:14:13.216454] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:13.455 [2024-11-21 04:14:13.216501] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:16:13.455 [2024-11-21 04:14:13.216532] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:13.455 [2024-11-21 04:14:13.218961] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:13.455 [2024-11-21 04:14:13.219044] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:13.455 [2024-11-21 04:14:13.219130] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:13.455 [2024-11-21 04:14:13.219205] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:13.455 [2024-11-21 04:14:13.219393] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:13.455 [2024-11-21 04:14:13.219461] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:13.455 [2024-11-21 04:14:13.219500] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state configuring 00:16:13.455 [2024-11-21 04:14:13.219588] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:13.455 [2024-11-21 04:14:13.219703] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:16:13.455 [2024-11-21 04:14:13.219743] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:13.455 [2024-11-21 04:14:13.220006] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:16:13.455 [2024-11-21 04:14:13.220171] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:16:13.456 [2024-11-21 04:14:13.220211] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:16:13.456 [2024-11-21 04:14:13.220429] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:13.456 pt1 00:16:13.456 04:14:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.456 04:14:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- 
# '[' 2 -gt 2 ']' 00:16:13.456 04:14:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:13.456 04:14:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:13.456 04:14:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:13.456 04:14:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:13.456 04:14:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:13.456 04:14:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:13.456 04:14:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:13.456 04:14:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:13.456 04:14:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:13.456 04:14:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:13.456 04:14:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.456 04:14:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.456 04:14:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.456 04:14:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:13.456 04:14:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.456 04:14:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:13.456 "name": "raid_bdev1", 00:16:13.456 "uuid": "7d24cfd0-cba9-4556-a1cb-91de80e345d0", 00:16:13.456 "strip_size_kb": 0, 00:16:13.456 "state": "online", 00:16:13.456 
"raid_level": "raid1", 00:16:13.456 "superblock": true, 00:16:13.456 "num_base_bdevs": 2, 00:16:13.456 "num_base_bdevs_discovered": 1, 00:16:13.456 "num_base_bdevs_operational": 1, 00:16:13.456 "base_bdevs_list": [ 00:16:13.456 { 00:16:13.456 "name": null, 00:16:13.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.456 "is_configured": false, 00:16:13.456 "data_offset": 256, 00:16:13.456 "data_size": 7936 00:16:13.456 }, 00:16:13.456 { 00:16:13.456 "name": "pt2", 00:16:13.456 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:13.456 "is_configured": true, 00:16:13.456 "data_offset": 256, 00:16:13.456 "data_size": 7936 00:16:13.456 } 00:16:13.456 ] 00:16:13.456 }' 00:16:13.456 04:14:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:13.456 04:14:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:14.026 04:14:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:14.026 04:14:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.026 04:14:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:14.026 04:14:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:14.026 04:14:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.026 04:14:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:14.026 04:14:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:14.026 04:14:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:14.026 04:14:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.026 04:14:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # 
set +x 00:16:14.026 [2024-11-21 04:14:13.787708] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:14.026 04:14:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.026 04:14:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 7d24cfd0-cba9-4556-a1cb-91de80e345d0 '!=' 7d24cfd0-cba9-4556-a1cb-91de80e345d0 ']' 00:16:14.026 04:14:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 96605 00:16:14.026 04:14:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 96605 ']' 00:16:14.026 04:14:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 96605 00:16:14.026 04:14:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname 00:16:14.026 04:14:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:14.026 04:14:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96605 00:16:14.026 04:14:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:14.026 killing process with pid 96605 00:16:14.026 04:14:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:14.026 04:14:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96605' 00:16:14.026 04:14:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 96605 00:16:14.026 [2024-11-21 04:14:13.868552] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:14.026 [2024-11-21 04:14:13.868611] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:14.026 [2024-11-21 04:14:13.868650] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:14.026 [2024-11-21 
04:14:13.868658] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:16:14.026 04:14:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # wait 96605 00:16:14.026 [2024-11-21 04:14:13.910141] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:14.287 04:14:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:16:14.287 00:16:14.287 real 0m5.205s 00:16:14.287 user 0m8.357s 00:16:14.287 sys 0m1.179s 00:16:14.287 04:14:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:14.287 ************************************ 00:16:14.287 END TEST raid_superblock_test_4k 00:16:14.287 ************************************ 00:16:14.287 04:14:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:14.548 04:14:14 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:16:14.548 04:14:14 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:16:14.548 04:14:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:14.548 04:14:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:14.548 04:14:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:14.548 ************************************ 00:16:14.548 START TEST raid_rebuild_test_sb_4k 00:16:14.548 ************************************ 00:16:14.548 04:14:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:16:14.548 04:14:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:14.548 04:14:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:16:14.548 04:14:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:14.548 04:14:14 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:14.548 04:14:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:14.548 04:14:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:14.548 04:14:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:14.548 04:14:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:14.548 04:14:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:14.548 04:14:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:14.548 04:14:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:14.548 04:14:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:14.548 04:14:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:14.548 04:14:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:14.548 04:14:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:14.548 04:14:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:14.548 04:14:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:14.548 04:14:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:14.548 04:14:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:14.548 04:14:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:14.548 04:14:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:14.548 04:14:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 
00:16:14.548 04:14:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:14.548 04:14:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:14.548 04:14:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=96922 00:16:14.548 04:14:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:14.548 04:14:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 96922 00:16:14.548 04:14:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 96922 ']' 00:16:14.548 04:14:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:14.548 04:14:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:14.548 04:14:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:14.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:14.548 04:14:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:14.548 04:14:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:14.548 [2024-11-21 04:14:14.425332] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:16:14.548 [2024-11-21 04:14:14.425585] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96922 ] 00:16:14.548 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:16:14.548 Zero copy mechanism will not be used. 00:16:14.808 [2024-11-21 04:14:14.581474] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:14.809 [2024-11-21 04:14:14.620311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:14.809 [2024-11-21 04:14:14.698969] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:14.809 [2024-11-21 04:14:14.699074] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:15.380 04:14:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:15.380 04:14:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:16:15.380 04:14:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:15.380 04:14:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:16:15.380 04:14:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.380 04:14:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:15.380 BaseBdev1_malloc 00:16:15.380 04:14:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.380 04:14:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:15.380 04:14:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.380 04:14:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:15.380 [2024-11-21 04:14:15.254413] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:15.380 [2024-11-21 04:14:15.254470] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:15.380 [2024-11-21 04:14:15.254497] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000006680 00:16:15.380 [2024-11-21 04:14:15.254510] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:15.380 [2024-11-21 04:14:15.257000] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:15.380 [2024-11-21 04:14:15.257078] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:15.380 BaseBdev1 00:16:15.380 04:14:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.380 04:14:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:15.380 04:14:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:16:15.380 04:14:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.380 04:14:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:15.380 BaseBdev2_malloc 00:16:15.380 04:14:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.380 04:14:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:15.380 04:14:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.380 04:14:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:15.380 [2024-11-21 04:14:15.289619] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:15.380 [2024-11-21 04:14:15.289666] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:15.380 [2024-11-21 04:14:15.289687] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:15.380 [2024-11-21 04:14:15.289696] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:16:15.380 [2024-11-21 04:14:15.292059] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:15.380 [2024-11-21 04:14:15.292136] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:15.380 BaseBdev2 00:16:15.380 04:14:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.380 04:14:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:16:15.380 04:14:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.380 04:14:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:15.380 spare_malloc 00:16:15.380 04:14:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.380 04:14:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:15.380 04:14:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.380 04:14:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:15.380 spare_delay 00:16:15.380 04:14:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.380 04:14:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:15.380 04:14:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.380 04:14:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:15.380 [2024-11-21 04:14:15.336709] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:15.380 [2024-11-21 04:14:15.336765] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:15.380 [2024-11-21 04:14:15.336786] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:16:15.380 [2024-11-21 04:14:15.336794] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:15.380 [2024-11-21 04:14:15.339150] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:15.380 [2024-11-21 04:14:15.339240] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:15.380 spare 00:16:15.380 04:14:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.380 04:14:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:16:15.380 04:14:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.380 04:14:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:15.380 [2024-11-21 04:14:15.348735] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:15.380 [2024-11-21 04:14:15.350890] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:15.641 [2024-11-21 04:14:15.351110] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:16:15.641 [2024-11-21 04:14:15.351127] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:15.641 [2024-11-21 04:14:15.351416] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:16:15.641 [2024-11-21 04:14:15.351573] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:16:15.641 [2024-11-21 04:14:15.351592] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:16:15.641 [2024-11-21 04:14:15.351696] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:15.641 
04:14:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.641 04:14:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:15.641 04:14:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:15.641 04:14:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:15.641 04:14:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:15.641 04:14:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:15.641 04:14:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:15.641 04:14:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:15.641 04:14:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:15.641 04:14:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:15.641 04:14:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:15.641 04:14:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.641 04:14:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.641 04:14:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.641 04:14:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:15.641 04:14:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.641 04:14:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:15.641 "name": "raid_bdev1", 00:16:15.641 "uuid": "404efbdd-24c4-4760-924d-6d203acadd18", 
00:16:15.641 "strip_size_kb": 0, 00:16:15.641 "state": "online", 00:16:15.641 "raid_level": "raid1", 00:16:15.641 "superblock": true, 00:16:15.641 "num_base_bdevs": 2, 00:16:15.641 "num_base_bdevs_discovered": 2, 00:16:15.641 "num_base_bdevs_operational": 2, 00:16:15.641 "base_bdevs_list": [ 00:16:15.641 { 00:16:15.641 "name": "BaseBdev1", 00:16:15.641 "uuid": "f3db9e7e-c76b-5fb3-b09f-f0b0082228f3", 00:16:15.641 "is_configured": true, 00:16:15.641 "data_offset": 256, 00:16:15.641 "data_size": 7936 00:16:15.641 }, 00:16:15.641 { 00:16:15.641 "name": "BaseBdev2", 00:16:15.641 "uuid": "e8525de6-b2d6-51a0-b89f-1d14e5d09195", 00:16:15.641 "is_configured": true, 00:16:15.641 "data_offset": 256, 00:16:15.641 "data_size": 7936 00:16:15.641 } 00:16:15.641 ] 00:16:15.641 }' 00:16:15.641 04:14:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:15.641 04:14:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:15.902 04:14:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:15.902 04:14:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:15.902 04:14:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.902 04:14:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:15.902 [2024-11-21 04:14:15.828366] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:15.902 04:14:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.902 04:14:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:16:15.902 04:14:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.902 04:14:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 
00:16:15.902 04:14:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.902 04:14:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:16.162 04:14:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.162 04:14:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:16:16.162 04:14:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:16.162 04:14:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:16.162 04:14:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:16.163 04:14:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:16.163 04:14:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:16.163 04:14:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:16.163 04:14:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:16.163 04:14:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:16.163 04:14:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:16.163 04:14:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:16:16.163 04:14:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:16.163 04:14:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:16.163 04:14:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:16.163 [2024-11-21 04:14:16.091673] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000002600 00:16:16.163 /dev/nbd0 00:16:16.423 04:14:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:16.423 04:14:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:16.423 04:14:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:16.423 04:14:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:16:16.423 04:14:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:16.423 04:14:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:16.423 04:14:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:16.423 04:14:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:16:16.423 04:14:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:16.423 04:14:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:16.423 04:14:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:16.423 1+0 records in 00:16:16.423 1+0 records out 00:16:16.423 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000607976 s, 6.7 MB/s 00:16:16.423 04:14:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:16.423 04:14:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:16:16.423 04:14:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:16.423 04:14:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:16.423 04:14:16 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:16:16.423 04:14:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:16.423 04:14:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:16.423 04:14:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:16:16.423 04:14:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:16:16.423 04:14:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:16:16.994 7936+0 records in 00:16:16.994 7936+0 records out 00:16:16.994 32505856 bytes (33 MB, 31 MiB) copied, 0.60238 s, 54.0 MB/s 00:16:16.994 04:14:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:16.994 04:14:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:16.994 04:14:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:16.994 04:14:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:16.994 04:14:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:16:16.994 04:14:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:16.994 04:14:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:17.254 04:14:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:17.254 [2024-11-21 04:14:16.988136] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:17.254 04:14:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:17.254 04:14:16 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:17.254 04:14:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:17.254 04:14:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:17.254 04:14:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:17.254 04:14:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:16:17.254 04:14:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:16:17.254 04:14:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:17.254 04:14:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.254 04:14:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:17.254 [2024-11-21 04:14:17.004304] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:17.254 04:14:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.254 04:14:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:17.254 04:14:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:17.254 04:14:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:17.254 04:14:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:17.254 04:14:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:17.254 04:14:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:17.254 04:14:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:17.254 04:14:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- 
# local num_base_bdevs 00:16:17.254 04:14:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:17.254 04:14:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:17.254 04:14:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.254 04:14:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.254 04:14:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.254 04:14:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:17.254 04:14:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.254 04:14:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:17.254 "name": "raid_bdev1", 00:16:17.254 "uuid": "404efbdd-24c4-4760-924d-6d203acadd18", 00:16:17.254 "strip_size_kb": 0, 00:16:17.254 "state": "online", 00:16:17.254 "raid_level": "raid1", 00:16:17.254 "superblock": true, 00:16:17.254 "num_base_bdevs": 2, 00:16:17.254 "num_base_bdevs_discovered": 1, 00:16:17.254 "num_base_bdevs_operational": 1, 00:16:17.254 "base_bdevs_list": [ 00:16:17.254 { 00:16:17.254 "name": null, 00:16:17.254 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.254 "is_configured": false, 00:16:17.254 "data_offset": 0, 00:16:17.254 "data_size": 7936 00:16:17.254 }, 00:16:17.254 { 00:16:17.254 "name": "BaseBdev2", 00:16:17.254 "uuid": "e8525de6-b2d6-51a0-b89f-1d14e5d09195", 00:16:17.254 "is_configured": true, 00:16:17.254 "data_offset": 256, 00:16:17.254 "data_size": 7936 00:16:17.254 } 00:16:17.254 ] 00:16:17.254 }' 00:16:17.254 04:14:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:17.254 04:14:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:17.515 04:14:17 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:17.515 04:14:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.515 04:14:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:17.515 [2024-11-21 04:14:17.455371] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:17.515 [2024-11-21 04:14:17.473178] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00019c960 00:16:17.515 04:14:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.515 04:14:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:17.515 [2024-11-21 04:14:17.475785] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:18.898 04:14:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:18.898 04:14:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:18.898 04:14:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:18.898 04:14:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:18.898 04:14:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:18.898 04:14:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.898 04:14:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.898 04:14:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.898 04:14:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:18.898 04:14:18 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.898 04:14:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:18.898 "name": "raid_bdev1", 00:16:18.898 "uuid": "404efbdd-24c4-4760-924d-6d203acadd18", 00:16:18.898 "strip_size_kb": 0, 00:16:18.898 "state": "online", 00:16:18.898 "raid_level": "raid1", 00:16:18.898 "superblock": true, 00:16:18.898 "num_base_bdevs": 2, 00:16:18.898 "num_base_bdevs_discovered": 2, 00:16:18.898 "num_base_bdevs_operational": 2, 00:16:18.898 "process": { 00:16:18.898 "type": "rebuild", 00:16:18.898 "target": "spare", 00:16:18.898 "progress": { 00:16:18.898 "blocks": 2560, 00:16:18.898 "percent": 32 00:16:18.898 } 00:16:18.898 }, 00:16:18.898 "base_bdevs_list": [ 00:16:18.898 { 00:16:18.898 "name": "spare", 00:16:18.898 "uuid": "c424ba18-10af-5fcd-a874-f6841a1b7f92", 00:16:18.898 "is_configured": true, 00:16:18.898 "data_offset": 256, 00:16:18.898 "data_size": 7936 00:16:18.898 }, 00:16:18.898 { 00:16:18.898 "name": "BaseBdev2", 00:16:18.898 "uuid": "e8525de6-b2d6-51a0-b89f-1d14e5d09195", 00:16:18.898 "is_configured": true, 00:16:18.898 "data_offset": 256, 00:16:18.898 "data_size": 7936 00:16:18.898 } 00:16:18.898 ] 00:16:18.898 }' 00:16:18.898 04:14:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:18.898 04:14:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:18.898 04:14:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:18.898 04:14:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:18.898 04:14:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:18.898 04:14:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.898 04:14:18 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:18.898 [2024-11-21 04:14:18.636209] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:18.898 [2024-11-21 04:14:18.684143] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:18.898 [2024-11-21 04:14:18.684263] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:18.898 [2024-11-21 04:14:18.684307] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:18.898 [2024-11-21 04:14:18.684328] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:18.898 04:14:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.898 04:14:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:18.898 04:14:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:18.898 04:14:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:18.898 04:14:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:18.898 04:14:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:18.898 04:14:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:18.898 04:14:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:18.898 04:14:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:18.898 04:14:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:18.898 04:14:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:18.898 04:14:18 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.898 04:14:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.898 04:14:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:18.898 04:14:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.898 04:14:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.898 04:14:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:18.898 "name": "raid_bdev1", 00:16:18.898 "uuid": "404efbdd-24c4-4760-924d-6d203acadd18", 00:16:18.898 "strip_size_kb": 0, 00:16:18.898 "state": "online", 00:16:18.898 "raid_level": "raid1", 00:16:18.898 "superblock": true, 00:16:18.898 "num_base_bdevs": 2, 00:16:18.898 "num_base_bdevs_discovered": 1, 00:16:18.898 "num_base_bdevs_operational": 1, 00:16:18.898 "base_bdevs_list": [ 00:16:18.898 { 00:16:18.898 "name": null, 00:16:18.898 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.898 "is_configured": false, 00:16:18.898 "data_offset": 0, 00:16:18.898 "data_size": 7936 00:16:18.899 }, 00:16:18.899 { 00:16:18.899 "name": "BaseBdev2", 00:16:18.899 "uuid": "e8525de6-b2d6-51a0-b89f-1d14e5d09195", 00:16:18.899 "is_configured": true, 00:16:18.899 "data_offset": 256, 00:16:18.899 "data_size": 7936 00:16:18.899 } 00:16:18.899 ] 00:16:18.899 }' 00:16:18.899 04:14:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:18.899 04:14:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:19.468 04:14:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:19.468 04:14:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:19.468 04:14:19 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:19.468 04:14:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:19.468 04:14:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:19.468 04:14:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.468 04:14:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.468 04:14:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.468 04:14:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:19.468 04:14:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.468 04:14:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:19.468 "name": "raid_bdev1", 00:16:19.468 "uuid": "404efbdd-24c4-4760-924d-6d203acadd18", 00:16:19.468 "strip_size_kb": 0, 00:16:19.468 "state": "online", 00:16:19.468 "raid_level": "raid1", 00:16:19.468 "superblock": true, 00:16:19.468 "num_base_bdevs": 2, 00:16:19.468 "num_base_bdevs_discovered": 1, 00:16:19.468 "num_base_bdevs_operational": 1, 00:16:19.468 "base_bdevs_list": [ 00:16:19.468 { 00:16:19.468 "name": null, 00:16:19.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.469 "is_configured": false, 00:16:19.469 "data_offset": 0, 00:16:19.469 "data_size": 7936 00:16:19.469 }, 00:16:19.469 { 00:16:19.469 "name": "BaseBdev2", 00:16:19.469 "uuid": "e8525de6-b2d6-51a0-b89f-1d14e5d09195", 00:16:19.469 "is_configured": true, 00:16:19.469 "data_offset": 256, 00:16:19.469 "data_size": 7936 00:16:19.469 } 00:16:19.469 ] 00:16:19.469 }' 00:16:19.469 04:14:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:19.469 04:14:19 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:19.469 04:14:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:19.469 04:14:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:19.469 04:14:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:19.469 04:14:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.469 04:14:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:19.469 [2024-11-21 04:14:19.343013] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:19.469 [2024-11-21 04:14:19.349982] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00019ca30 00:16:19.469 04:14:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.469 04:14:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:19.469 [2024-11-21 04:14:19.352263] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:20.408 04:14:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:20.408 04:14:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:20.408 04:14:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:20.408 04:14:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:20.408 04:14:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:20.408 04:14:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.408 04:14:20 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.408 04:14:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.408 04:14:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:20.669 04:14:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.669 04:14:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:20.669 "name": "raid_bdev1", 00:16:20.669 "uuid": "404efbdd-24c4-4760-924d-6d203acadd18", 00:16:20.669 "strip_size_kb": 0, 00:16:20.669 "state": "online", 00:16:20.669 "raid_level": "raid1", 00:16:20.669 "superblock": true, 00:16:20.669 "num_base_bdevs": 2, 00:16:20.669 "num_base_bdevs_discovered": 2, 00:16:20.669 "num_base_bdevs_operational": 2, 00:16:20.669 "process": { 00:16:20.669 "type": "rebuild", 00:16:20.669 "target": "spare", 00:16:20.669 "progress": { 00:16:20.669 "blocks": 2560, 00:16:20.669 "percent": 32 00:16:20.669 } 00:16:20.669 }, 00:16:20.669 "base_bdevs_list": [ 00:16:20.669 { 00:16:20.669 "name": "spare", 00:16:20.669 "uuid": "c424ba18-10af-5fcd-a874-f6841a1b7f92", 00:16:20.669 "is_configured": true, 00:16:20.669 "data_offset": 256, 00:16:20.669 "data_size": 7936 00:16:20.669 }, 00:16:20.669 { 00:16:20.669 "name": "BaseBdev2", 00:16:20.669 "uuid": "e8525de6-b2d6-51a0-b89f-1d14e5d09195", 00:16:20.669 "is_configured": true, 00:16:20.669 "data_offset": 256, 00:16:20.669 "data_size": 7936 00:16:20.669 } 00:16:20.669 ] 00:16:20.669 }' 00:16:20.669 04:14:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:20.669 04:14:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:20.669 04:14:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:20.669 04:14:20 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:20.669 04:14:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:20.669 04:14:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:20.669 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:20.669 04:14:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:16:20.669 04:14:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:20.669 04:14:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:16:20.669 04:14:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=576 00:16:20.669 04:14:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:20.669 04:14:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:20.669 04:14:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:20.669 04:14:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:20.669 04:14:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:20.669 04:14:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:20.669 04:14:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.669 04:14:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.669 04:14:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.669 04:14:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:20.669 04:14:20 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.669 04:14:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:20.669 "name": "raid_bdev1", 00:16:20.669 "uuid": "404efbdd-24c4-4760-924d-6d203acadd18", 00:16:20.669 "strip_size_kb": 0, 00:16:20.669 "state": "online", 00:16:20.669 "raid_level": "raid1", 00:16:20.669 "superblock": true, 00:16:20.669 "num_base_bdevs": 2, 00:16:20.669 "num_base_bdevs_discovered": 2, 00:16:20.669 "num_base_bdevs_operational": 2, 00:16:20.669 "process": { 00:16:20.669 "type": "rebuild", 00:16:20.669 "target": "spare", 00:16:20.669 "progress": { 00:16:20.669 "blocks": 2816, 00:16:20.669 "percent": 35 00:16:20.669 } 00:16:20.669 }, 00:16:20.669 "base_bdevs_list": [ 00:16:20.669 { 00:16:20.669 "name": "spare", 00:16:20.669 "uuid": "c424ba18-10af-5fcd-a874-f6841a1b7f92", 00:16:20.669 "is_configured": true, 00:16:20.669 "data_offset": 256, 00:16:20.669 "data_size": 7936 00:16:20.669 }, 00:16:20.669 { 00:16:20.669 "name": "BaseBdev2", 00:16:20.669 "uuid": "e8525de6-b2d6-51a0-b89f-1d14e5d09195", 00:16:20.669 "is_configured": true, 00:16:20.669 "data_offset": 256, 00:16:20.669 "data_size": 7936 00:16:20.669 } 00:16:20.669 ] 00:16:20.669 }' 00:16:20.669 04:14:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:20.669 04:14:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:20.669 04:14:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:20.669 04:14:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:20.669 04:14:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:22.052 04:14:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:22.052 04:14:21 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:22.052 04:14:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:22.052 04:14:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:22.052 04:14:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:22.052 04:14:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:22.052 04:14:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.052 04:14:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.052 04:14:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.052 04:14:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:22.052 04:14:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.052 04:14:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:22.052 "name": "raid_bdev1", 00:16:22.052 "uuid": "404efbdd-24c4-4760-924d-6d203acadd18", 00:16:22.052 "strip_size_kb": 0, 00:16:22.052 "state": "online", 00:16:22.052 "raid_level": "raid1", 00:16:22.052 "superblock": true, 00:16:22.052 "num_base_bdevs": 2, 00:16:22.052 "num_base_bdevs_discovered": 2, 00:16:22.052 "num_base_bdevs_operational": 2, 00:16:22.052 "process": { 00:16:22.052 "type": "rebuild", 00:16:22.052 "target": "spare", 00:16:22.052 "progress": { 00:16:22.052 "blocks": 5632, 00:16:22.052 "percent": 70 00:16:22.052 } 00:16:22.052 }, 00:16:22.052 "base_bdevs_list": [ 00:16:22.052 { 00:16:22.052 "name": "spare", 00:16:22.052 "uuid": "c424ba18-10af-5fcd-a874-f6841a1b7f92", 00:16:22.052 "is_configured": true, 00:16:22.052 "data_offset": 256, 00:16:22.052 "data_size": 7936 00:16:22.052 
}, 00:16:22.052 { 00:16:22.052 "name": "BaseBdev2", 00:16:22.052 "uuid": "e8525de6-b2d6-51a0-b89f-1d14e5d09195", 00:16:22.052 "is_configured": true, 00:16:22.052 "data_offset": 256, 00:16:22.052 "data_size": 7936 00:16:22.052 } 00:16:22.052 ] 00:16:22.052 }' 00:16:22.052 04:14:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:22.052 04:14:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:22.052 04:14:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:22.052 04:14:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:22.052 04:14:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:22.622 [2024-11-21 04:14:22.471563] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:22.622 [2024-11-21 04:14:22.471687] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:22.622 [2024-11-21 04:14:22.471825] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:22.881 04:14:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:22.881 04:14:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:22.881 04:14:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:22.882 04:14:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:22.882 04:14:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:22.882 04:14:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:22.882 04:14:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:16:22.882 04:14:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.882 04:14:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.882 04:14:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:22.882 04:14:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.882 04:14:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:22.882 "name": "raid_bdev1", 00:16:22.882 "uuid": "404efbdd-24c4-4760-924d-6d203acadd18", 00:16:22.882 "strip_size_kb": 0, 00:16:22.882 "state": "online", 00:16:22.882 "raid_level": "raid1", 00:16:22.882 "superblock": true, 00:16:22.882 "num_base_bdevs": 2, 00:16:22.882 "num_base_bdevs_discovered": 2, 00:16:22.882 "num_base_bdevs_operational": 2, 00:16:22.882 "base_bdevs_list": [ 00:16:22.882 { 00:16:22.882 "name": "spare", 00:16:22.882 "uuid": "c424ba18-10af-5fcd-a874-f6841a1b7f92", 00:16:22.882 "is_configured": true, 00:16:22.882 "data_offset": 256, 00:16:22.882 "data_size": 7936 00:16:22.882 }, 00:16:22.882 { 00:16:22.882 "name": "BaseBdev2", 00:16:22.882 "uuid": "e8525de6-b2d6-51a0-b89f-1d14e5d09195", 00:16:22.882 "is_configured": true, 00:16:22.882 "data_offset": 256, 00:16:22.882 "data_size": 7936 00:16:22.882 } 00:16:22.882 ] 00:16:22.882 }' 00:16:22.882 04:14:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:23.142 04:14:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:23.142 04:14:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:23.142 04:14:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:23.142 04:14:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 
00:16:23.142 04:14:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:23.142 04:14:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:23.142 04:14:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:23.142 04:14:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:23.142 04:14:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:23.142 04:14:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.142 04:14:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:23.142 04:14:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.142 04:14:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:23.142 04:14:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.142 04:14:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:23.142 "name": "raid_bdev1", 00:16:23.142 "uuid": "404efbdd-24c4-4760-924d-6d203acadd18", 00:16:23.142 "strip_size_kb": 0, 00:16:23.142 "state": "online", 00:16:23.142 "raid_level": "raid1", 00:16:23.142 "superblock": true, 00:16:23.142 "num_base_bdevs": 2, 00:16:23.142 "num_base_bdevs_discovered": 2, 00:16:23.142 "num_base_bdevs_operational": 2, 00:16:23.142 "base_bdevs_list": [ 00:16:23.142 { 00:16:23.142 "name": "spare", 00:16:23.142 "uuid": "c424ba18-10af-5fcd-a874-f6841a1b7f92", 00:16:23.142 "is_configured": true, 00:16:23.142 "data_offset": 256, 00:16:23.142 "data_size": 7936 00:16:23.142 }, 00:16:23.142 { 00:16:23.142 "name": "BaseBdev2", 00:16:23.142 "uuid": "e8525de6-b2d6-51a0-b89f-1d14e5d09195", 00:16:23.142 "is_configured": true, 
00:16:23.142 "data_offset": 256, 00:16:23.142 "data_size": 7936 00:16:23.142 } 00:16:23.142 ] 00:16:23.142 }' 00:16:23.142 04:14:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:23.142 04:14:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:23.142 04:14:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:23.142 04:14:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:23.142 04:14:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:23.142 04:14:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:23.142 04:14:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:23.142 04:14:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:23.142 04:14:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:23.142 04:14:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:23.142 04:14:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:23.142 04:14:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:23.142 04:14:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:23.142 04:14:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:23.142 04:14:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.142 04:14:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.142 04:14:23 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:16:23.142 04:14:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:23.142 04:14:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.400 04:14:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:23.400 "name": "raid_bdev1", 00:16:23.400 "uuid": "404efbdd-24c4-4760-924d-6d203acadd18", 00:16:23.400 "strip_size_kb": 0, 00:16:23.400 "state": "online", 00:16:23.400 "raid_level": "raid1", 00:16:23.400 "superblock": true, 00:16:23.400 "num_base_bdevs": 2, 00:16:23.400 "num_base_bdevs_discovered": 2, 00:16:23.400 "num_base_bdevs_operational": 2, 00:16:23.400 "base_bdevs_list": [ 00:16:23.400 { 00:16:23.400 "name": "spare", 00:16:23.400 "uuid": "c424ba18-10af-5fcd-a874-f6841a1b7f92", 00:16:23.400 "is_configured": true, 00:16:23.400 "data_offset": 256, 00:16:23.400 "data_size": 7936 00:16:23.400 }, 00:16:23.400 { 00:16:23.400 "name": "BaseBdev2", 00:16:23.400 "uuid": "e8525de6-b2d6-51a0-b89f-1d14e5d09195", 00:16:23.400 "is_configured": true, 00:16:23.400 "data_offset": 256, 00:16:23.400 "data_size": 7936 00:16:23.400 } 00:16:23.400 ] 00:16:23.400 }' 00:16:23.400 04:14:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:23.400 04:14:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:23.659 04:14:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:23.659 04:14:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.659 04:14:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:23.659 [2024-11-21 04:14:23.528724] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:23.659 [2024-11-21 04:14:23.528753] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev 
state changing from online to offline 00:16:23.659 [2024-11-21 04:14:23.528852] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:23.659 [2024-11-21 04:14:23.528917] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:23.659 [2024-11-21 04:14:23.528938] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:16:23.659 04:14:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.659 04:14:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.659 04:14:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:16:23.659 04:14:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.659 04:14:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:23.659 04:14:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.659 04:14:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:23.659 04:14:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:23.659 04:14:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:23.659 04:14:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:23.659 04:14:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:23.659 04:14:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:23.659 04:14:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:23.659 04:14:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:23.659 04:14:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:23.659 04:14:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:16:23.659 04:14:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:23.659 04:14:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:23.659 04:14:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:23.920 /dev/nbd0 00:16:23.920 04:14:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:23.920 04:14:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:23.920 04:14:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:23.920 04:14:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:16:23.920 04:14:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:23.920 04:14:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:23.920 04:14:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:23.920 04:14:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:16:23.920 04:14:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:23.920 04:14:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:23.920 04:14:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:23.920 1+0 records in 00:16:23.920 1+0 records out 00:16:23.920 4096 bytes (4.1 kB, 4.0 
KiB) copied, 0.000466062 s, 8.8 MB/s 00:16:23.920 04:14:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:23.920 04:14:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:16:23.920 04:14:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:23.920 04:14:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:23.920 04:14:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:16:23.920 04:14:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:23.920 04:14:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:23.920 04:14:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:24.180 /dev/nbd1 00:16:24.180 04:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:24.180 04:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:24.180 04:14:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:24.180 04:14:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:16:24.180 04:14:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:24.180 04:14:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:24.180 04:14:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:24.180 04:14:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:16:24.180 04:14:24 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:24.180 04:14:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:24.180 04:14:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:24.180 1+0 records in 00:16:24.180 1+0 records out 00:16:24.180 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000409305 s, 10.0 MB/s 00:16:24.180 04:14:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:24.180 04:14:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:16:24.180 04:14:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:24.180 04:14:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:24.180 04:14:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:16:24.180 04:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:24.180 04:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:24.180 04:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:24.440 04:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:24.440 04:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:24.440 04:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:24.440 04:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:24.440 04:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 
00:16:24.440 04:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:24.440 04:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:24.440 04:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:24.440 04:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:24.440 04:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:24.441 04:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:24.441 04:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:24.441 04:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:24.441 04:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:16:24.441 04:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:16:24.441 04:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:24.441 04:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:24.701 04:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:24.701 04:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:24.701 04:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:24.701 04:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:24.701 04:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:24.701 04:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # 
grep -q -w nbd1 /proc/partitions 00:16:24.701 04:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:16:24.701 04:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:16:24.701 04:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:24.701 04:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:24.701 04:14:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.701 04:14:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:24.701 04:14:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.701 04:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:24.701 04:14:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.701 04:14:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:24.701 [2024-11-21 04:14:24.611379] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:24.701 [2024-11-21 04:14:24.611439] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:24.701 [2024-11-21 04:14:24.611462] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:24.701 [2024-11-21 04:14:24.611477] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:24.701 [2024-11-21 04:14:24.613947] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:24.701 [2024-11-21 04:14:24.613990] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:24.701 [2024-11-21 04:14:24.614082] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:24.701 [2024-11-21 
04:14:24.614135] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:24.701 [2024-11-21 04:14:24.614289] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:24.701 spare 00:16:24.701 04:14:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.701 04:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:24.701 04:14:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.701 04:14:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:24.961 [2024-11-21 04:14:24.714191] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:16:24.961 [2024-11-21 04:14:24.714270] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:24.961 [2024-11-21 04:14:24.714597] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001bb1b0 00:16:24.961 [2024-11-21 04:14:24.714766] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:16:24.961 [2024-11-21 04:14:24.714780] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:16:24.961 [2024-11-21 04:14:24.714916] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:24.961 04:14:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.961 04:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:24.961 04:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:24.961 04:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:24.961 04:14:24 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:24.961 04:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:24.961 04:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:24.961 04:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:24.961 04:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:24.961 04:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:24.961 04:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:24.961 04:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.961 04:14:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.961 04:14:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:24.961 04:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:24.961 04:14:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.961 04:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:24.961 "name": "raid_bdev1", 00:16:24.961 "uuid": "404efbdd-24c4-4760-924d-6d203acadd18", 00:16:24.961 "strip_size_kb": 0, 00:16:24.961 "state": "online", 00:16:24.961 "raid_level": "raid1", 00:16:24.961 "superblock": true, 00:16:24.961 "num_base_bdevs": 2, 00:16:24.961 "num_base_bdevs_discovered": 2, 00:16:24.961 "num_base_bdevs_operational": 2, 00:16:24.961 "base_bdevs_list": [ 00:16:24.961 { 00:16:24.961 "name": "spare", 00:16:24.961 "uuid": "c424ba18-10af-5fcd-a874-f6841a1b7f92", 00:16:24.961 "is_configured": true, 00:16:24.961 "data_offset": 256, 00:16:24.961 "data_size": 7936 00:16:24.961 }, 00:16:24.961 { 
00:16:24.961 "name": "BaseBdev2", 00:16:24.961 "uuid": "e8525de6-b2d6-51a0-b89f-1d14e5d09195", 00:16:24.961 "is_configured": true, 00:16:24.961 "data_offset": 256, 00:16:24.961 "data_size": 7936 00:16:24.961 } 00:16:24.961 ] 00:16:24.961 }' 00:16:24.961 04:14:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:24.961 04:14:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:25.221 04:14:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:25.221 04:14:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:25.221 04:14:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:25.221 04:14:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:25.221 04:14:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:25.221 04:14:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.221 04:14:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.221 04:14:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.221 04:14:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:25.221 04:14:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.481 04:14:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:25.481 "name": "raid_bdev1", 00:16:25.481 "uuid": "404efbdd-24c4-4760-924d-6d203acadd18", 00:16:25.481 "strip_size_kb": 0, 00:16:25.481 "state": "online", 00:16:25.481 "raid_level": "raid1", 00:16:25.481 "superblock": true, 00:16:25.481 "num_base_bdevs": 2, 00:16:25.481 "num_base_bdevs_discovered": 2, 
00:16:25.481 "num_base_bdevs_operational": 2, 00:16:25.481 "base_bdevs_list": [ 00:16:25.481 { 00:16:25.481 "name": "spare", 00:16:25.481 "uuid": "c424ba18-10af-5fcd-a874-f6841a1b7f92", 00:16:25.481 "is_configured": true, 00:16:25.481 "data_offset": 256, 00:16:25.481 "data_size": 7936 00:16:25.481 }, 00:16:25.481 { 00:16:25.481 "name": "BaseBdev2", 00:16:25.481 "uuid": "e8525de6-b2d6-51a0-b89f-1d14e5d09195", 00:16:25.481 "is_configured": true, 00:16:25.481 "data_offset": 256, 00:16:25.481 "data_size": 7936 00:16:25.481 } 00:16:25.481 ] 00:16:25.481 }' 00:16:25.481 04:14:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:25.481 04:14:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:25.481 04:14:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:25.481 04:14:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:25.481 04:14:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.481 04:14:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:25.481 04:14:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.481 04:14:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:25.481 04:14:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.481 04:14:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:25.481 04:14:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:25.481 04:14:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.481 04:14:25 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:16:25.481 [2024-11-21 04:14:25.370096] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:25.481 04:14:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.481 04:14:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:25.481 04:14:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:25.481 04:14:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:25.481 04:14:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:25.481 04:14:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:25.481 04:14:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:25.481 04:14:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:25.481 04:14:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.481 04:14:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:25.481 04:14:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.481 04:14:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.482 04:14:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.482 04:14:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.482 04:14:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:25.482 04:14:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.482 04:14:25 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:25.482 "name": "raid_bdev1", 00:16:25.482 "uuid": "404efbdd-24c4-4760-924d-6d203acadd18", 00:16:25.482 "strip_size_kb": 0, 00:16:25.482 "state": "online", 00:16:25.482 "raid_level": "raid1", 00:16:25.482 "superblock": true, 00:16:25.482 "num_base_bdevs": 2, 00:16:25.482 "num_base_bdevs_discovered": 1, 00:16:25.482 "num_base_bdevs_operational": 1, 00:16:25.482 "base_bdevs_list": [ 00:16:25.482 { 00:16:25.482 "name": null, 00:16:25.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.482 "is_configured": false, 00:16:25.482 "data_offset": 0, 00:16:25.482 "data_size": 7936 00:16:25.482 }, 00:16:25.482 { 00:16:25.482 "name": "BaseBdev2", 00:16:25.482 "uuid": "e8525de6-b2d6-51a0-b89f-1d14e5d09195", 00:16:25.482 "is_configured": true, 00:16:25.482 "data_offset": 256, 00:16:25.482 "data_size": 7936 00:16:25.482 } 00:16:25.482 ] 00:16:25.482 }' 00:16:25.482 04:14:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:25.482 04:14:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:26.051 04:14:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:26.051 04:14:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.051 04:14:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:26.051 [2024-11-21 04:14:25.833326] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:26.051 [2024-11-21 04:14:25.833517] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:26.052 [2024-11-21 04:14:25.833548] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:26.052 [2024-11-21 04:14:25.833599] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:26.052 [2024-11-21 04:14:25.842034] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001bb280 00:16:26.052 04:14:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.052 04:14:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:26.052 [2024-11-21 04:14:25.844167] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:26.992 04:14:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:26.992 04:14:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:26.992 04:14:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:26.992 04:14:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:26.992 04:14:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:26.992 04:14:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.992 04:14:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.992 04:14:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.992 04:14:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:26.992 04:14:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.992 04:14:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:26.992 "name": "raid_bdev1", 00:16:26.992 "uuid": "404efbdd-24c4-4760-924d-6d203acadd18", 00:16:26.992 "strip_size_kb": 0, 00:16:26.992 "state": "online", 
00:16:26.992 "raid_level": "raid1", 00:16:26.992 "superblock": true, 00:16:26.992 "num_base_bdevs": 2, 00:16:26.992 "num_base_bdevs_discovered": 2, 00:16:26.992 "num_base_bdevs_operational": 2, 00:16:26.992 "process": { 00:16:26.992 "type": "rebuild", 00:16:26.992 "target": "spare", 00:16:26.992 "progress": { 00:16:26.992 "blocks": 2560, 00:16:26.992 "percent": 32 00:16:26.992 } 00:16:26.992 }, 00:16:26.992 "base_bdevs_list": [ 00:16:26.992 { 00:16:26.992 "name": "spare", 00:16:26.992 "uuid": "c424ba18-10af-5fcd-a874-f6841a1b7f92", 00:16:26.992 "is_configured": true, 00:16:26.992 "data_offset": 256, 00:16:26.992 "data_size": 7936 00:16:26.992 }, 00:16:26.992 { 00:16:26.992 "name": "BaseBdev2", 00:16:26.992 "uuid": "e8525de6-b2d6-51a0-b89f-1d14e5d09195", 00:16:26.992 "is_configured": true, 00:16:26.992 "data_offset": 256, 00:16:26.992 "data_size": 7936 00:16:26.992 } 00:16:26.992 ] 00:16:26.992 }' 00:16:26.992 04:14:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:26.992 04:14:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:26.992 04:14:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:27.253 04:14:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:27.253 04:14:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:27.253 04:14:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.253 04:14:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:27.253 [2024-11-21 04:14:27.000406] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:27.253 [2024-11-21 04:14:27.051563] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:27.253 [2024-11-21 
04:14:27.051658] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:27.253 [2024-11-21 04:14:27.051679] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:27.253 [2024-11-21 04:14:27.051686] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:27.253 04:14:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.253 04:14:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:27.253 04:14:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:27.253 04:14:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:27.253 04:14:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:27.253 04:14:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:27.253 04:14:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:27.253 04:14:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:27.253 04:14:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:27.253 04:14:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:27.253 04:14:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:27.253 04:14:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.253 04:14:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.253 04:14:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.253 04:14:27 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:16:27.253 04:14:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.253 04:14:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:27.253 "name": "raid_bdev1", 00:16:27.253 "uuid": "404efbdd-24c4-4760-924d-6d203acadd18", 00:16:27.253 "strip_size_kb": 0, 00:16:27.253 "state": "online", 00:16:27.253 "raid_level": "raid1", 00:16:27.253 "superblock": true, 00:16:27.253 "num_base_bdevs": 2, 00:16:27.253 "num_base_bdevs_discovered": 1, 00:16:27.253 "num_base_bdevs_operational": 1, 00:16:27.253 "base_bdevs_list": [ 00:16:27.253 { 00:16:27.253 "name": null, 00:16:27.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.253 "is_configured": false, 00:16:27.253 "data_offset": 0, 00:16:27.253 "data_size": 7936 00:16:27.253 }, 00:16:27.253 { 00:16:27.253 "name": "BaseBdev2", 00:16:27.253 "uuid": "e8525de6-b2d6-51a0-b89f-1d14e5d09195", 00:16:27.253 "is_configured": true, 00:16:27.253 "data_offset": 256, 00:16:27.253 "data_size": 7936 00:16:27.253 } 00:16:27.253 ] 00:16:27.253 }' 00:16:27.253 04:14:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:27.253 04:14:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:27.824 04:14:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:27.824 04:14:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.824 04:14:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:27.824 [2024-11-21 04:14:27.490105] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:27.824 [2024-11-21 04:14:27.490219] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:27.824 [2024-11-21 04:14:27.490279] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x616000009f80 00:16:27.824 [2024-11-21 04:14:27.490311] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:27.824 [2024-11-21 04:14:27.490837] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:27.824 [2024-11-21 04:14:27.490894] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:27.824 [2024-11-21 04:14:27.491021] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:27.824 [2024-11-21 04:14:27.491066] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:27.824 [2024-11-21 04:14:27.491132] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:27.824 [2024-11-21 04:14:27.491193] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:27.824 [2024-11-21 04:14:27.497545] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001bb350 00:16:27.824 spare 00:16:27.824 04:14:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.824 04:14:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:27.824 [2024-11-21 04:14:27.499727] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:28.764 04:14:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:28.764 04:14:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:28.764 04:14:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:28.764 04:14:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:28.764 04:14:28 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:28.764 04:14:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.764 04:14:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.764 04:14:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.764 04:14:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:28.764 04:14:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.764 04:14:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:28.764 "name": "raid_bdev1", 00:16:28.764 "uuid": "404efbdd-24c4-4760-924d-6d203acadd18", 00:16:28.764 "strip_size_kb": 0, 00:16:28.764 "state": "online", 00:16:28.764 "raid_level": "raid1", 00:16:28.764 "superblock": true, 00:16:28.764 "num_base_bdevs": 2, 00:16:28.764 "num_base_bdevs_discovered": 2, 00:16:28.764 "num_base_bdevs_operational": 2, 00:16:28.764 "process": { 00:16:28.764 "type": "rebuild", 00:16:28.764 "target": "spare", 00:16:28.764 "progress": { 00:16:28.764 "blocks": 2560, 00:16:28.764 "percent": 32 00:16:28.764 } 00:16:28.764 }, 00:16:28.764 "base_bdevs_list": [ 00:16:28.765 { 00:16:28.765 "name": "spare", 00:16:28.765 "uuid": "c424ba18-10af-5fcd-a874-f6841a1b7f92", 00:16:28.765 "is_configured": true, 00:16:28.765 "data_offset": 256, 00:16:28.765 "data_size": 7936 00:16:28.765 }, 00:16:28.765 { 00:16:28.765 "name": "BaseBdev2", 00:16:28.765 "uuid": "e8525de6-b2d6-51a0-b89f-1d14e5d09195", 00:16:28.765 "is_configured": true, 00:16:28.765 "data_offset": 256, 00:16:28.765 "data_size": 7936 00:16:28.765 } 00:16:28.765 ] 00:16:28.765 }' 00:16:28.765 04:14:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:28.765 04:14:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:16:28.765 04:14:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:28.765 04:14:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:28.765 04:14:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:28.765 04:14:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.765 04:14:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:28.765 [2024-11-21 04:14:28.656420] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:28.765 [2024-11-21 04:14:28.707469] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:28.765 [2024-11-21 04:14:28.707568] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:28.765 [2024-11-21 04:14:28.707586] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:28.765 [2024-11-21 04:14:28.707596] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:28.765 04:14:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.765 04:14:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:28.765 04:14:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:28.765 04:14:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:28.765 04:14:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:28.765 04:14:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:28.765 04:14:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:16:28.765 04:14:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:28.765 04:14:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:28.765 04:14:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:28.765 04:14:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:28.765 04:14:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.765 04:14:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.765 04:14:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.765 04:14:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:29.025 04:14:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.025 04:14:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:29.025 "name": "raid_bdev1", 00:16:29.025 "uuid": "404efbdd-24c4-4760-924d-6d203acadd18", 00:16:29.025 "strip_size_kb": 0, 00:16:29.025 "state": "online", 00:16:29.025 "raid_level": "raid1", 00:16:29.025 "superblock": true, 00:16:29.025 "num_base_bdevs": 2, 00:16:29.025 "num_base_bdevs_discovered": 1, 00:16:29.025 "num_base_bdevs_operational": 1, 00:16:29.025 "base_bdevs_list": [ 00:16:29.025 { 00:16:29.025 "name": null, 00:16:29.025 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.025 "is_configured": false, 00:16:29.025 "data_offset": 0, 00:16:29.025 "data_size": 7936 00:16:29.025 }, 00:16:29.025 { 00:16:29.025 "name": "BaseBdev2", 00:16:29.025 "uuid": "e8525de6-b2d6-51a0-b89f-1d14e5d09195", 00:16:29.025 "is_configured": true, 00:16:29.025 "data_offset": 256, 00:16:29.025 "data_size": 7936 00:16:29.025 } 00:16:29.025 ] 00:16:29.025 }' 
00:16:29.025 04:14:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:29.025 04:14:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:29.289 04:14:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:29.289 04:14:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:29.289 04:14:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:29.289 04:14:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:29.289 04:14:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:29.289 04:14:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.290 04:14:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.290 04:14:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.290 04:14:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:29.290 04:14:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.290 04:14:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:29.290 "name": "raid_bdev1", 00:16:29.290 "uuid": "404efbdd-24c4-4760-924d-6d203acadd18", 00:16:29.290 "strip_size_kb": 0, 00:16:29.290 "state": "online", 00:16:29.290 "raid_level": "raid1", 00:16:29.290 "superblock": true, 00:16:29.290 "num_base_bdevs": 2, 00:16:29.290 "num_base_bdevs_discovered": 1, 00:16:29.290 "num_base_bdevs_operational": 1, 00:16:29.290 "base_bdevs_list": [ 00:16:29.290 { 00:16:29.290 "name": null, 00:16:29.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.290 "is_configured": false, 00:16:29.290 "data_offset": 0, 
00:16:29.290 "data_size": 7936 00:16:29.290 }, 00:16:29.290 { 00:16:29.290 "name": "BaseBdev2", 00:16:29.290 "uuid": "e8525de6-b2d6-51a0-b89f-1d14e5d09195", 00:16:29.290 "is_configured": true, 00:16:29.290 "data_offset": 256, 00:16:29.290 "data_size": 7936 00:16:29.290 } 00:16:29.290 ] 00:16:29.290 }' 00:16:29.290 04:14:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:29.553 04:14:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:29.553 04:14:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:29.553 04:14:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:29.553 04:14:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:29.553 04:14:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.553 04:14:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:29.553 04:14:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.553 04:14:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:29.553 04:14:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.554 04:14:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:29.554 [2024-11-21 04:14:29.345699] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:29.554 [2024-11-21 04:14:29.345750] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:29.554 [2024-11-21 04:14:29.345771] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:16:29.554 [2024-11-21 04:14:29.345782] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:29.554 [2024-11-21 04:14:29.346210] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:29.554 [2024-11-21 04:14:29.346246] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:29.554 [2024-11-21 04:14:29.346318] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:29.554 [2024-11-21 04:14:29.346336] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:29.554 [2024-11-21 04:14:29.346345] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:29.554 [2024-11-21 04:14:29.346363] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:29.554 BaseBdev1 00:16:29.554 04:14:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.554 04:14:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:30.494 04:14:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:30.494 04:14:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:30.494 04:14:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:30.494 04:14:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:30.494 04:14:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:30.494 04:14:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:30.494 04:14:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:30.494 04:14:30 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:30.494 04:14:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:30.494 04:14:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:30.494 04:14:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.494 04:14:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:30.494 04:14:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.494 04:14:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:30.494 04:14:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.494 04:14:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:30.494 "name": "raid_bdev1", 00:16:30.494 "uuid": "404efbdd-24c4-4760-924d-6d203acadd18", 00:16:30.494 "strip_size_kb": 0, 00:16:30.494 "state": "online", 00:16:30.494 "raid_level": "raid1", 00:16:30.494 "superblock": true, 00:16:30.494 "num_base_bdevs": 2, 00:16:30.494 "num_base_bdevs_discovered": 1, 00:16:30.494 "num_base_bdevs_operational": 1, 00:16:30.494 "base_bdevs_list": [ 00:16:30.494 { 00:16:30.494 "name": null, 00:16:30.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.494 "is_configured": false, 00:16:30.494 "data_offset": 0, 00:16:30.494 "data_size": 7936 00:16:30.494 }, 00:16:30.494 { 00:16:30.494 "name": "BaseBdev2", 00:16:30.494 "uuid": "e8525de6-b2d6-51a0-b89f-1d14e5d09195", 00:16:30.494 "is_configured": true, 00:16:30.494 "data_offset": 256, 00:16:30.494 "data_size": 7936 00:16:30.494 } 00:16:30.494 ] 00:16:30.494 }' 00:16:30.494 04:14:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:30.494 04:14:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 
00:16:31.073 04:14:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:31.073 04:14:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:31.073 04:14:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:31.073 04:14:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:31.073 04:14:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:31.073 04:14:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.073 04:14:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.073 04:14:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:31.073 04:14:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.073 04:14:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.073 04:14:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:31.073 "name": "raid_bdev1", 00:16:31.073 "uuid": "404efbdd-24c4-4760-924d-6d203acadd18", 00:16:31.073 "strip_size_kb": 0, 00:16:31.073 "state": "online", 00:16:31.073 "raid_level": "raid1", 00:16:31.073 "superblock": true, 00:16:31.073 "num_base_bdevs": 2, 00:16:31.073 "num_base_bdevs_discovered": 1, 00:16:31.073 "num_base_bdevs_operational": 1, 00:16:31.073 "base_bdevs_list": [ 00:16:31.073 { 00:16:31.073 "name": null, 00:16:31.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.073 "is_configured": false, 00:16:31.073 "data_offset": 0, 00:16:31.073 "data_size": 7936 00:16:31.073 }, 00:16:31.073 { 00:16:31.073 "name": "BaseBdev2", 00:16:31.073 "uuid": "e8525de6-b2d6-51a0-b89f-1d14e5d09195", 00:16:31.073 "is_configured": true, 
00:16:31.073 "data_offset": 256, 00:16:31.073 "data_size": 7936 00:16:31.073 } 00:16:31.073 ] 00:16:31.073 }' 00:16:31.073 04:14:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:31.073 04:14:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:31.073 04:14:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:31.073 04:14:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:31.073 04:14:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:31.074 04:14:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # local es=0 00:16:31.074 04:14:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:31.074 04:14:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:31.074 04:14:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:31.074 04:14:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:31.074 04:14:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:31.074 04:14:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:31.074 04:14:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.074 04:14:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:31.074 [2024-11-21 04:14:30.919076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:31.074 [2024-11-21 04:14:30.919192] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:31.074 [2024-11-21 04:14:30.919204] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:31.074 request: 00:16:31.074 { 00:16:31.074 "base_bdev": "BaseBdev1", 00:16:31.074 "raid_bdev": "raid_bdev1", 00:16:31.074 "method": "bdev_raid_add_base_bdev", 00:16:31.074 "req_id": 1 00:16:31.074 } 00:16:31.074 Got JSON-RPC error response 00:16:31.074 response: 00:16:31.074 { 00:16:31.074 "code": -22, 00:16:31.074 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:31.074 } 00:16:31.074 04:14:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:31.074 04:14:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # es=1 00:16:31.074 04:14:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:31.074 04:14:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:31.074 04:14:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:31.074 04:14:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:32.059 04:14:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:32.059 04:14:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:32.059 04:14:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:32.059 04:14:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:32.059 04:14:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:32.059 04:14:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:16:32.059 04:14:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.059 04:14:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:32.059 04:14:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:32.059 04:14:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:32.059 04:14:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.059 04:14:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.059 04:14:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.059 04:14:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:32.059 04:14:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.059 04:14:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:32.059 "name": "raid_bdev1", 00:16:32.059 "uuid": "404efbdd-24c4-4760-924d-6d203acadd18", 00:16:32.059 "strip_size_kb": 0, 00:16:32.059 "state": "online", 00:16:32.059 "raid_level": "raid1", 00:16:32.059 "superblock": true, 00:16:32.059 "num_base_bdevs": 2, 00:16:32.059 "num_base_bdevs_discovered": 1, 00:16:32.059 "num_base_bdevs_operational": 1, 00:16:32.059 "base_bdevs_list": [ 00:16:32.059 { 00:16:32.059 "name": null, 00:16:32.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.059 "is_configured": false, 00:16:32.059 "data_offset": 0, 00:16:32.059 "data_size": 7936 00:16:32.059 }, 00:16:32.059 { 00:16:32.059 "name": "BaseBdev2", 00:16:32.059 "uuid": "e8525de6-b2d6-51a0-b89f-1d14e5d09195", 00:16:32.059 "is_configured": true, 00:16:32.059 "data_offset": 256, 00:16:32.059 "data_size": 7936 00:16:32.059 } 00:16:32.059 ] 00:16:32.059 }' 
00:16:32.059 04:14:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:32.059 04:14:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:32.629 04:14:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:32.629 04:14:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:32.629 04:14:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:32.629 04:14:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:32.629 04:14:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:32.629 04:14:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.629 04:14:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.629 04:14:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:32.629 04:14:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.629 04:14:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.629 04:14:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:32.629 "name": "raid_bdev1", 00:16:32.629 "uuid": "404efbdd-24c4-4760-924d-6d203acadd18", 00:16:32.629 "strip_size_kb": 0, 00:16:32.629 "state": "online", 00:16:32.629 "raid_level": "raid1", 00:16:32.629 "superblock": true, 00:16:32.629 "num_base_bdevs": 2, 00:16:32.629 "num_base_bdevs_discovered": 1, 00:16:32.629 "num_base_bdevs_operational": 1, 00:16:32.629 "base_bdevs_list": [ 00:16:32.629 { 00:16:32.629 "name": null, 00:16:32.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.629 "is_configured": false, 00:16:32.629 "data_offset": 0, 
00:16:32.629 "data_size": 7936 00:16:32.629 }, 00:16:32.629 { 00:16:32.629 "name": "BaseBdev2", 00:16:32.629 "uuid": "e8525de6-b2d6-51a0-b89f-1d14e5d09195", 00:16:32.629 "is_configured": true, 00:16:32.629 "data_offset": 256, 00:16:32.629 "data_size": 7936 00:16:32.629 } 00:16:32.629 ] 00:16:32.629 }' 00:16:32.629 04:14:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:32.629 04:14:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:32.629 04:14:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:32.629 04:14:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:32.629 04:14:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 96922 00:16:32.629 04:14:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 96922 ']' 00:16:32.629 04:14:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 96922 00:16:32.629 04:14:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:16:32.629 04:14:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:32.629 04:14:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96922 00:16:32.629 04:14:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:32.629 04:14:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:32.629 04:14:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96922' 00:16:32.629 killing process with pid 96922 00:16:32.629 Received shutdown signal, test time was about 60.000000 seconds 00:16:32.629 00:16:32.629 Latency(us) 00:16:32.629 [2024-11-21T04:14:32.603Z] Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:32.630 [2024-11-21T04:14:32.603Z] =================================================================================================================== 00:16:32.630 [2024-11-21T04:14:32.603Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:32.630 04:14:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 96922 00:16:32.630 [2024-11-21 04:14:32.595479] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:32.630 [2024-11-21 04:14:32.595582] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:32.630 [2024-11-21 04:14:32.595630] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:32.630 [2024-11-21 04:14:32.595640] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:16:32.630 04:14:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 96922 00:16:32.890 [2024-11-21 04:14:32.651835] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:33.150 04:14:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:16:33.150 00:16:33.150 real 0m18.640s 00:16:33.150 user 0m24.631s 00:16:33.150 sys 0m2.727s 00:16:33.150 04:14:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:33.150 04:14:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:33.150 ************************************ 00:16:33.150 END TEST raid_rebuild_test_sb_4k 00:16:33.150 ************************************ 00:16:33.150 04:14:33 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:16:33.150 04:14:33 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:16:33.150 04:14:33 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:33.150 04:14:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:33.150 04:14:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:33.150 ************************************ 00:16:33.150 START TEST raid_state_function_test_sb_md_separate 00:16:33.150 ************************************ 00:16:33.150 04:14:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:16:33.150 04:14:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:16:33.150 04:14:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:16:33.150 04:14:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:33.150 04:14:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:33.150 04:14:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:33.150 04:14:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:33.150 04:14:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:33.150 04:14:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:33.150 04:14:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:33.150 04:14:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:33.150 04:14:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:33.150 04:14:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:33.150 04:14:33 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:33.150 04:14:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:33.150 04:14:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:33.150 04:14:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:33.150 04:14:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:33.150 04:14:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:33.150 04:14:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:16:33.150 04:14:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:16:33.150 04:14:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:33.150 04:14:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:33.150 04:14:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=97597 00:16:33.150 04:14:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:33.150 04:14:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 97597' 00:16:33.150 Process raid pid: 97597 00:16:33.150 04:14:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 97597 00:16:33.150 04:14:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 97597 ']' 00:16:33.150 04:14:33 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:33.150 04:14:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:33.150 04:14:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:33.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:33.150 04:14:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:33.150 04:14:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:33.411 [2024-11-21 04:14:33.146372] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:16:33.411 [2024-11-21 04:14:33.146506] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:33.411 [2024-11-21 04:14:33.304321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:33.411 [2024-11-21 04:14:33.344306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:33.680 [2024-11-21 04:14:33.421447] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:33.680 [2024-11-21 04:14:33.421489] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:34.249 04:14:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:34.249 04:14:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:16:34.249 04:14:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # 
rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:34.249 04:14:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.249 04:14:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:34.249 [2024-11-21 04:14:33.973368] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:34.249 [2024-11-21 04:14:33.973425] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:34.249 [2024-11-21 04:14:33.973435] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:34.249 [2024-11-21 04:14:33.973445] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:34.249 04:14:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.249 04:14:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:34.249 04:14:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:34.249 04:14:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:34.249 04:14:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:34.249 04:14:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:34.249 04:14:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:34.249 04:14:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:34.250 04:14:33 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:34.250 04:14:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:34.250 04:14:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:34.250 04:14:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.250 04:14:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:34.250 04:14:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.250 04:14:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:34.250 04:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.250 04:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:34.250 "name": "Existed_Raid", 00:16:34.250 "uuid": "0a5295f9-2c3f-4033-82a3-01934b0e3960", 00:16:34.250 "strip_size_kb": 0, 00:16:34.250 "state": "configuring", 00:16:34.250 "raid_level": "raid1", 00:16:34.250 "superblock": true, 00:16:34.250 "num_base_bdevs": 2, 00:16:34.250 "num_base_bdevs_discovered": 0, 00:16:34.250 "num_base_bdevs_operational": 2, 00:16:34.250 "base_bdevs_list": [ 00:16:34.250 { 00:16:34.250 "name": "BaseBdev1", 00:16:34.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.250 "is_configured": false, 00:16:34.250 "data_offset": 0, 00:16:34.250 "data_size": 0 00:16:34.250 }, 00:16:34.250 { 00:16:34.250 "name": "BaseBdev2", 00:16:34.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.250 "is_configured": false, 00:16:34.250 "data_offset": 0, 00:16:34.250 "data_size": 0 00:16:34.250 } 00:16:34.250 ] 00:16:34.250 }' 00:16:34.250 04:14:34 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:34.250 04:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:34.510 04:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:34.510 04:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.510 04:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:34.510 [2024-11-21 04:14:34.400524] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:34.510 [2024-11-21 04:14:34.400607] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:16:34.510 04:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.510 04:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:34.510 04:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.510 04:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:34.510 [2024-11-21 04:14:34.412510] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:34.510 [2024-11-21 04:14:34.412601] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:34.510 [2024-11-21 04:14:34.412626] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:34.510 [2024-11-21 04:14:34.412660] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:34.510 04:14:34 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.510 04:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:16:34.510 04:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.510 04:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:34.510 [2024-11-21 04:14:34.440612] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:34.510 BaseBdev1 00:16:34.510 04:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.510 04:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:34.510 04:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:34.510 04:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:34.510 04:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:16:34.510 04:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:34.510 04:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:34.510 04:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:34.510 04:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.510 04:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:34.510 04:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.510 04:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:34.510 04:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.510 04:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:34.510 [ 00:16:34.510 { 00:16:34.510 "name": "BaseBdev1", 00:16:34.510 "aliases": [ 00:16:34.510 "e0c851b0-2248-4d64-9b93-df5fe036d09f" 00:16:34.510 ], 00:16:34.510 "product_name": "Malloc disk", 00:16:34.510 "block_size": 4096, 00:16:34.510 "num_blocks": 8192, 00:16:34.510 "uuid": "e0c851b0-2248-4d64-9b93-df5fe036d09f", 00:16:34.510 "md_size": 32, 00:16:34.510 "md_interleave": false, 00:16:34.510 "dif_type": 0, 00:16:34.510 "assigned_rate_limits": { 00:16:34.510 "rw_ios_per_sec": 0, 00:16:34.510 "rw_mbytes_per_sec": 0, 00:16:34.510 "r_mbytes_per_sec": 0, 00:16:34.510 "w_mbytes_per_sec": 0 00:16:34.510 }, 00:16:34.510 "claimed": true, 00:16:34.510 "claim_type": "exclusive_write", 00:16:34.510 "zoned": false, 00:16:34.510 "supported_io_types": { 00:16:34.510 "read": true, 00:16:34.510 "write": true, 00:16:34.510 "unmap": true, 00:16:34.510 "flush": true, 00:16:34.510 "reset": true, 00:16:34.510 "nvme_admin": false, 00:16:34.510 "nvme_io": false, 00:16:34.510 "nvme_io_md": false, 00:16:34.510 "write_zeroes": true, 00:16:34.510 "zcopy": true, 00:16:34.510 "get_zone_info": false, 00:16:34.510 "zone_management": false, 00:16:34.510 "zone_append": false, 00:16:34.510 "compare": false, 00:16:34.510 "compare_and_write": false, 00:16:34.510 "abort": true, 00:16:34.510 "seek_hole": false, 00:16:34.510 "seek_data": false, 00:16:34.510 "copy": true, 00:16:34.510 "nvme_iov_md": false 00:16:34.510 }, 00:16:34.510 "memory_domains": [ 00:16:34.510 { 00:16:34.510 "dma_device_id": "system", 00:16:34.510 "dma_device_type": 1 00:16:34.510 }, 
00:16:34.510 { 00:16:34.510 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:34.510 "dma_device_type": 2 00:16:34.510 } 00:16:34.510 ], 00:16:34.510 "driver_specific": {} 00:16:34.510 } 00:16:34.510 ] 00:16:34.510 04:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.510 04:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:16:34.510 04:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:34.510 04:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:34.510 04:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:34.510 04:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:34.510 04:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:34.510 04:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:34.770 04:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:34.770 04:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:34.770 04:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:34.770 04:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:34.770 04:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.770 04:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:34.770 04:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:34.770 04:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:34.770 04:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.770 04:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:34.770 "name": "Existed_Raid", 00:16:34.770 "uuid": "f25ca13e-0045-4935-921b-215323fa1834", 00:16:34.770 "strip_size_kb": 0, 00:16:34.770 "state": "configuring", 00:16:34.770 "raid_level": "raid1", 00:16:34.770 "superblock": true, 00:16:34.770 "num_base_bdevs": 2, 00:16:34.770 "num_base_bdevs_discovered": 1, 00:16:34.770 "num_base_bdevs_operational": 2, 00:16:34.770 "base_bdevs_list": [ 00:16:34.770 { 00:16:34.770 "name": "BaseBdev1", 00:16:34.770 "uuid": "e0c851b0-2248-4d64-9b93-df5fe036d09f", 00:16:34.770 "is_configured": true, 00:16:34.770 "data_offset": 256, 00:16:34.770 "data_size": 7936 00:16:34.770 }, 00:16:34.770 { 00:16:34.770 "name": "BaseBdev2", 00:16:34.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.770 "is_configured": false, 00:16:34.770 "data_offset": 0, 00:16:34.770 "data_size": 0 00:16:34.770 } 00:16:34.770 ] 00:16:34.770 }' 00:16:34.770 04:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:34.770 04:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:35.030 04:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:35.030 04:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.030 04:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 
-- # set +x 00:16:35.030 [2024-11-21 04:14:34.927985] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:35.030 [2024-11-21 04:14:34.928025] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:16:35.030 04:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.030 04:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:35.030 04:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.030 04:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:35.030 [2024-11-21 04:14:34.940011] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:35.030 [2024-11-21 04:14:34.942052] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:35.030 [2024-11-21 04:14:34.942093] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:35.030 04:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.030 04:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:35.030 04:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:35.030 04:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:35.031 04:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:35.031 04:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:16:35.031 04:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:35.031 04:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:35.031 04:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:35.031 04:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:35.031 04:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:35.031 04:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:35.031 04:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:35.031 04:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.031 04:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.031 04:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:35.031 04:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:35.031 04:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.031 04:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:35.031 "name": "Existed_Raid", 00:16:35.031 "uuid": "9ca65d8a-335b-4df6-8bd7-010e4fc52183", 00:16:35.031 "strip_size_kb": 0, 00:16:35.031 "state": "configuring", 00:16:35.031 "raid_level": "raid1", 00:16:35.031 "superblock": true, 00:16:35.031 "num_base_bdevs": 2, 00:16:35.031 "num_base_bdevs_discovered": 1, 00:16:35.031 
"num_base_bdevs_operational": 2, 00:16:35.031 "base_bdevs_list": [ 00:16:35.031 { 00:16:35.031 "name": "BaseBdev1", 00:16:35.031 "uuid": "e0c851b0-2248-4d64-9b93-df5fe036d09f", 00:16:35.031 "is_configured": true, 00:16:35.031 "data_offset": 256, 00:16:35.031 "data_size": 7936 00:16:35.031 }, 00:16:35.031 { 00:16:35.031 "name": "BaseBdev2", 00:16:35.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.031 "is_configured": false, 00:16:35.031 "data_offset": 0, 00:16:35.031 "data_size": 0 00:16:35.031 } 00:16:35.031 ] 00:16:35.031 }' 00:16:35.031 04:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:35.031 04:14:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:35.601 04:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:16:35.601 04:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.601 04:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:35.601 [2024-11-21 04:14:35.417296] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:35.601 [2024-11-21 04:14:35.417567] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:16:35.601 [2024-11-21 04:14:35.417617] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:35.601 [2024-11-21 04:14:35.417769] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:16:35.601 [2024-11-21 04:14:35.417934] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:16:35.601 [2024-11-21 04:14:35.417985] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:16:35.601 BaseBdev2 
00:16:35.601 [2024-11-21 04:14:35.418140] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:35.601 04:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.601 04:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:35.601 04:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:35.601 04:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:35.601 04:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:16:35.601 04:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:35.601 04:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:35.601 04:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:35.601 04:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.601 04:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:35.601 04:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.601 04:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:35.601 04:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.601 04:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:35.601 [ 00:16:35.601 { 00:16:35.601 "name": "BaseBdev2", 00:16:35.601 "aliases": [ 00:16:35.601 
"64e6e79b-4d01-4634-a195-7ba0511a5dd2" 00:16:35.601 ], 00:16:35.601 "product_name": "Malloc disk", 00:16:35.601 "block_size": 4096, 00:16:35.601 "num_blocks": 8192, 00:16:35.601 "uuid": "64e6e79b-4d01-4634-a195-7ba0511a5dd2", 00:16:35.601 "md_size": 32, 00:16:35.601 "md_interleave": false, 00:16:35.601 "dif_type": 0, 00:16:35.601 "assigned_rate_limits": { 00:16:35.601 "rw_ios_per_sec": 0, 00:16:35.601 "rw_mbytes_per_sec": 0, 00:16:35.601 "r_mbytes_per_sec": 0, 00:16:35.601 "w_mbytes_per_sec": 0 00:16:35.601 }, 00:16:35.601 "claimed": true, 00:16:35.601 "claim_type": "exclusive_write", 00:16:35.601 "zoned": false, 00:16:35.601 "supported_io_types": { 00:16:35.601 "read": true, 00:16:35.601 "write": true, 00:16:35.601 "unmap": true, 00:16:35.601 "flush": true, 00:16:35.601 "reset": true, 00:16:35.601 "nvme_admin": false, 00:16:35.601 "nvme_io": false, 00:16:35.601 "nvme_io_md": false, 00:16:35.601 "write_zeroes": true, 00:16:35.601 "zcopy": true, 00:16:35.601 "get_zone_info": false, 00:16:35.601 "zone_management": false, 00:16:35.601 "zone_append": false, 00:16:35.601 "compare": false, 00:16:35.601 "compare_and_write": false, 00:16:35.601 "abort": true, 00:16:35.601 "seek_hole": false, 00:16:35.601 "seek_data": false, 00:16:35.601 "copy": true, 00:16:35.601 "nvme_iov_md": false 00:16:35.601 }, 00:16:35.602 "memory_domains": [ 00:16:35.602 { 00:16:35.602 "dma_device_id": "system", 00:16:35.602 "dma_device_type": 1 00:16:35.602 }, 00:16:35.602 { 00:16:35.602 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:35.602 "dma_device_type": 2 00:16:35.602 } 00:16:35.602 ], 00:16:35.602 "driver_specific": {} 00:16:35.602 } 00:16:35.602 ] 00:16:35.602 04:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.602 04:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:16:35.602 04:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( 
i++ )) 00:16:35.602 04:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:35.602 04:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:35.602 04:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:35.602 04:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:35.602 04:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:35.602 04:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:35.602 04:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:35.602 04:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:35.602 04:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:35.602 04:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:35.602 04:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:35.602 04:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.602 04:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:35.602 04:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.602 04:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:35.602 04:14:35 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.602 04:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:35.602 "name": "Existed_Raid", 00:16:35.602 "uuid": "9ca65d8a-335b-4df6-8bd7-010e4fc52183", 00:16:35.602 "strip_size_kb": 0, 00:16:35.602 "state": "online", 00:16:35.602 "raid_level": "raid1", 00:16:35.602 "superblock": true, 00:16:35.602 "num_base_bdevs": 2, 00:16:35.602 "num_base_bdevs_discovered": 2, 00:16:35.602 "num_base_bdevs_operational": 2, 00:16:35.602 "base_bdevs_list": [ 00:16:35.602 { 00:16:35.602 "name": "BaseBdev1", 00:16:35.602 "uuid": "e0c851b0-2248-4d64-9b93-df5fe036d09f", 00:16:35.602 "is_configured": true, 00:16:35.602 "data_offset": 256, 00:16:35.602 "data_size": 7936 00:16:35.602 }, 00:16:35.602 { 00:16:35.602 "name": "BaseBdev2", 00:16:35.602 "uuid": "64e6e79b-4d01-4634-a195-7ba0511a5dd2", 00:16:35.602 "is_configured": true, 00:16:35.602 "data_offset": 256, 00:16:35.602 "data_size": 7936 00:16:35.602 } 00:16:35.602 ] 00:16:35.602 }' 00:16:35.602 04:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:35.602 04:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:36.181 04:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:36.181 04:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:36.181 04:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:36.181 04:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:36.181 04:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:16:36.181 04:14:35 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:36.181 04:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:36.181 04:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:36.181 04:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.181 04:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:36.181 [2024-11-21 04:14:35.916709] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:36.181 04:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.181 04:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:36.181 "name": "Existed_Raid", 00:16:36.181 "aliases": [ 00:16:36.181 "9ca65d8a-335b-4df6-8bd7-010e4fc52183" 00:16:36.181 ], 00:16:36.181 "product_name": "Raid Volume", 00:16:36.181 "block_size": 4096, 00:16:36.181 "num_blocks": 7936, 00:16:36.181 "uuid": "9ca65d8a-335b-4df6-8bd7-010e4fc52183", 00:16:36.181 "md_size": 32, 00:16:36.181 "md_interleave": false, 00:16:36.181 "dif_type": 0, 00:16:36.181 "assigned_rate_limits": { 00:16:36.181 "rw_ios_per_sec": 0, 00:16:36.181 "rw_mbytes_per_sec": 0, 00:16:36.181 "r_mbytes_per_sec": 0, 00:16:36.181 "w_mbytes_per_sec": 0 00:16:36.181 }, 00:16:36.181 "claimed": false, 00:16:36.181 "zoned": false, 00:16:36.181 "supported_io_types": { 00:16:36.181 "read": true, 00:16:36.181 "write": true, 00:16:36.181 "unmap": false, 00:16:36.181 "flush": false, 00:16:36.181 "reset": true, 00:16:36.181 "nvme_admin": false, 00:16:36.181 "nvme_io": false, 00:16:36.181 "nvme_io_md": false, 00:16:36.181 "write_zeroes": true, 00:16:36.181 "zcopy": false, 00:16:36.181 "get_zone_info": 
false, 00:16:36.181 "zone_management": false, 00:16:36.181 "zone_append": false, 00:16:36.181 "compare": false, 00:16:36.181 "compare_and_write": false, 00:16:36.181 "abort": false, 00:16:36.181 "seek_hole": false, 00:16:36.181 "seek_data": false, 00:16:36.181 "copy": false, 00:16:36.181 "nvme_iov_md": false 00:16:36.181 }, 00:16:36.181 "memory_domains": [ 00:16:36.181 { 00:16:36.181 "dma_device_id": "system", 00:16:36.181 "dma_device_type": 1 00:16:36.181 }, 00:16:36.181 { 00:16:36.181 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:36.181 "dma_device_type": 2 00:16:36.181 }, 00:16:36.181 { 00:16:36.181 "dma_device_id": "system", 00:16:36.181 "dma_device_type": 1 00:16:36.181 }, 00:16:36.181 { 00:16:36.181 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:36.181 "dma_device_type": 2 00:16:36.181 } 00:16:36.181 ], 00:16:36.181 "driver_specific": { 00:16:36.181 "raid": { 00:16:36.181 "uuid": "9ca65d8a-335b-4df6-8bd7-010e4fc52183", 00:16:36.181 "strip_size_kb": 0, 00:16:36.181 "state": "online", 00:16:36.181 "raid_level": "raid1", 00:16:36.181 "superblock": true, 00:16:36.181 "num_base_bdevs": 2, 00:16:36.181 "num_base_bdevs_discovered": 2, 00:16:36.181 "num_base_bdevs_operational": 2, 00:16:36.181 "base_bdevs_list": [ 00:16:36.181 { 00:16:36.181 "name": "BaseBdev1", 00:16:36.181 "uuid": "e0c851b0-2248-4d64-9b93-df5fe036d09f", 00:16:36.181 "is_configured": true, 00:16:36.181 "data_offset": 256, 00:16:36.181 "data_size": 7936 00:16:36.181 }, 00:16:36.181 { 00:16:36.181 "name": "BaseBdev2", 00:16:36.181 "uuid": "64e6e79b-4d01-4634-a195-7ba0511a5dd2", 00:16:36.181 "is_configured": true, 00:16:36.181 "data_offset": 256, 00:16:36.181 "data_size": 7936 00:16:36.181 } 00:16:36.181 ] 00:16:36.181 } 00:16:36.181 } 00:16:36.181 }' 00:16:36.181 04:14:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:36.181 04:14:36 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:36.181 BaseBdev2' 00:16:36.181 04:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:36.181 04:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:16:36.181 04:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:36.181 04:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:36.181 04:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.181 04:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:36.181 04:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:36.181 04:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.181 04:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:16:36.181 04:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:16:36.181 04:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:36.181 04:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:36.181 04:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.181 04:14:36 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:36.181 04:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:36.182 04:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.441 04:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:16:36.441 04:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:16:36.441 04:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:36.441 04:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.441 04:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:36.441 [2024-11-21 04:14:36.164374] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:36.441 04:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.442 04:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:36.442 04:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:16:36.442 04:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:36.442 04:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:16:36.442 04:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:36.442 04:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # 
verify_raid_bdev_state Existed_Raid online raid1 0 1 00:16:36.442 04:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:36.442 04:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:36.442 04:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:36.442 04:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:36.442 04:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:36.442 04:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:36.442 04:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:36.442 04:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:36.442 04:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:36.442 04:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:36.442 04:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.442 04:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.442 04:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:36.442 04:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.442 04:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:36.442 "name": "Existed_Raid", 
00:16:36.442 "uuid": "9ca65d8a-335b-4df6-8bd7-010e4fc52183", 00:16:36.442 "strip_size_kb": 0, 00:16:36.442 "state": "online", 00:16:36.442 "raid_level": "raid1", 00:16:36.442 "superblock": true, 00:16:36.442 "num_base_bdevs": 2, 00:16:36.442 "num_base_bdevs_discovered": 1, 00:16:36.442 "num_base_bdevs_operational": 1, 00:16:36.442 "base_bdevs_list": [ 00:16:36.442 { 00:16:36.442 "name": null, 00:16:36.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.442 "is_configured": false, 00:16:36.442 "data_offset": 0, 00:16:36.442 "data_size": 7936 00:16:36.442 }, 00:16:36.442 { 00:16:36.442 "name": "BaseBdev2", 00:16:36.442 "uuid": "64e6e79b-4d01-4634-a195-7ba0511a5dd2", 00:16:36.442 "is_configured": true, 00:16:36.442 "data_offset": 256, 00:16:36.442 "data_size": 7936 00:16:36.442 } 00:16:36.442 ] 00:16:36.442 }' 00:16:36.442 04:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:36.442 04:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:36.702 04:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:36.702 04:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:36.702 04:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:36.702 04:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.702 04:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.702 04:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:36.702 04:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.702 04:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:36.702 04:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:36.702 04:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:36.702 04:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.702 04:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:36.702 [2024-11-21 04:14:36.669902] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:36.702 [2024-11-21 04:14:36.670069] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:36.963 [2024-11-21 04:14:36.692670] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:36.963 [2024-11-21 04:14:36.692794] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:36.963 [2024-11-21 04:14:36.692813] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:16:36.963 04:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.963 04:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:36.963 04:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:36.963 04:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.963 04:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:36.963 04:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:36.963 04:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:36.963 04:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.963 04:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:36.963 04:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:36.963 04:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:16:36.963 04:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 97597 00:16:36.963 04:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 97597 ']' 00:16:36.963 04:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 97597 00:16:36.963 04:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:16:36.963 04:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:36.963 04:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97597 00:16:36.963 04:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:36.963 killing process with pid 97597 00:16:36.963 04:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:36.963 04:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97597' 00:16:36.963 04:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 97597 00:16:36.963 [2024-11-21 04:14:36.790664] 
bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:36.963 04:14:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 97597 00:16:36.963 [2024-11-21 04:14:36.792212] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:37.223 04:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:16:37.223 00:16:37.223 real 0m4.074s 00:16:37.223 user 0m6.245s 00:16:37.223 sys 0m0.907s 00:16:37.223 04:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:37.223 ************************************ 00:16:37.223 END TEST raid_state_function_test_sb_md_separate 00:16:37.223 ************************************ 00:16:37.223 04:14:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:37.223 04:14:37 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:16:37.223 04:14:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:37.223 04:14:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:37.223 04:14:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:37.484 ************************************ 00:16:37.484 START TEST raid_superblock_test_md_separate 00:16:37.484 ************************************ 00:16:37.484 04:14:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:16:37.484 04:14:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:16:37.484 04:14:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:16:37.484 04:14:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:37.484 04:14:37 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:37.484 04:14:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:37.484 04:14:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:37.484 04:14:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:37.484 04:14:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:37.484 04:14:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:37.484 04:14:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:37.484 04:14:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:37.484 04:14:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:37.484 04:14:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:37.484 04:14:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:16:37.484 04:14:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:16:37.484 04:14:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=97839 00:16:37.484 04:14:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:37.484 04:14:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 97839 00:16:37.484 04:14:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 97839 ']' 00:16:37.484 04:14:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:37.484 04:14:37 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:37.484 04:14:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:37.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:37.484 04:14:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:37.484 04:14:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:37.484 [2024-11-21 04:14:37.292956] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:16:37.484 [2024-11-21 04:14:37.293176] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97839 ] 00:16:37.484 [2024-11-21 04:14:37.448534] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:37.744 [2024-11-21 04:14:37.486277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:37.744 [2024-11-21 04:14:37.563446] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:37.744 [2024-11-21 04:14:37.563493] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:38.314 04:14:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:38.314 04:14:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0 00:16:38.314 04:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:38.314 04:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:38.314 04:14:38 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:38.314 04:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:38.314 04:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:38.314 04:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:38.314 04:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:38.314 04:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:38.314 04:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:16:38.314 04:14:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.314 04:14:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:38.314 malloc1 00:16:38.314 04:14:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.314 04:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:38.314 04:14:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.314 04:14:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:38.314 [2024-11-21 04:14:38.151638] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:38.314 [2024-11-21 04:14:38.151747] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:38.314 [2024-11-21 04:14:38.151789] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x616000006680 00:16:38.314 [2024-11-21 04:14:38.151819] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:38.314 [2024-11-21 04:14:38.154095] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:38.314 [2024-11-21 04:14:38.154170] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:38.314 pt1 00:16:38.314 04:14:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.314 04:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:38.314 04:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:38.314 04:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:38.314 04:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:16:38.314 04:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:38.314 04:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:38.314 04:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:38.314 04:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:38.314 04:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:16:38.314 04:14:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.314 04:14:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:38.314 malloc2 00:16:38.314 04:14:38 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.314 04:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:38.314 04:14:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.314 04:14:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:38.315 [2024-11-21 04:14:38.191554] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:38.315 [2024-11-21 04:14:38.191643] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:38.315 [2024-11-21 04:14:38.191674] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:38.315 [2024-11-21 04:14:38.191703] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:38.315 [2024-11-21 04:14:38.193942] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:38.315 [2024-11-21 04:14:38.194015] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:38.315 pt2 00:16:38.315 04:14:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.315 04:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:38.315 04:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:38.315 04:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:16:38.315 04:14:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.315 04:14:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:38.315 [2024-11-21 04:14:38.203574] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:38.315 [2024-11-21 04:14:38.205697] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:38.315 [2024-11-21 04:14:38.205850] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:16:38.315 [2024-11-21 04:14:38.205866] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:38.315 [2024-11-21 04:14:38.205944] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:16:38.315 [2024-11-21 04:14:38.206067] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:16:38.315 [2024-11-21 04:14:38.206080] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:16:38.315 [2024-11-21 04:14:38.206163] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:38.315 04:14:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.315 04:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:38.315 04:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:38.315 04:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:38.315 04:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:38.315 04:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:38.315 04:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:38.315 04:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:38.315 04:14:38 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:38.315 04:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:38.315 04:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:38.315 04:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.315 04:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.315 04:14:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.315 04:14:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:38.315 04:14:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.315 04:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:38.315 "name": "raid_bdev1", 00:16:38.315 "uuid": "54df5d2f-eba1-4bae-98b4-6e6d654bed31", 00:16:38.315 "strip_size_kb": 0, 00:16:38.315 "state": "online", 00:16:38.315 "raid_level": "raid1", 00:16:38.315 "superblock": true, 00:16:38.315 "num_base_bdevs": 2, 00:16:38.315 "num_base_bdevs_discovered": 2, 00:16:38.315 "num_base_bdevs_operational": 2, 00:16:38.315 "base_bdevs_list": [ 00:16:38.315 { 00:16:38.315 "name": "pt1", 00:16:38.315 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:38.315 "is_configured": true, 00:16:38.315 "data_offset": 256, 00:16:38.315 "data_size": 7936 00:16:38.315 }, 00:16:38.315 { 00:16:38.315 "name": "pt2", 00:16:38.315 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:38.315 "is_configured": true, 00:16:38.315 "data_offset": 256, 00:16:38.315 "data_size": 7936 00:16:38.315 } 00:16:38.315 ] 00:16:38.315 }' 00:16:38.315 04:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:16:38.315 04:14:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:38.884 04:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:38.884 04:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:38.884 04:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:38.884 04:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:38.884 04:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:16:38.884 04:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:38.884 04:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:38.884 04:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:38.884 04:14:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.884 04:14:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:38.884 [2024-11-21 04:14:38.619116] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:38.884 04:14:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.884 04:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:38.884 "name": "raid_bdev1", 00:16:38.884 "aliases": [ 00:16:38.884 "54df5d2f-eba1-4bae-98b4-6e6d654bed31" 00:16:38.884 ], 00:16:38.884 "product_name": "Raid Volume", 00:16:38.884 "block_size": 4096, 00:16:38.884 "num_blocks": 7936, 00:16:38.884 "uuid": "54df5d2f-eba1-4bae-98b4-6e6d654bed31", 00:16:38.884 "md_size": 32, 
00:16:38.884 "md_interleave": false, 00:16:38.884 "dif_type": 0, 00:16:38.884 "assigned_rate_limits": { 00:16:38.884 "rw_ios_per_sec": 0, 00:16:38.884 "rw_mbytes_per_sec": 0, 00:16:38.884 "r_mbytes_per_sec": 0, 00:16:38.884 "w_mbytes_per_sec": 0 00:16:38.884 }, 00:16:38.884 "claimed": false, 00:16:38.884 "zoned": false, 00:16:38.884 "supported_io_types": { 00:16:38.884 "read": true, 00:16:38.884 "write": true, 00:16:38.884 "unmap": false, 00:16:38.884 "flush": false, 00:16:38.884 "reset": true, 00:16:38.884 "nvme_admin": false, 00:16:38.884 "nvme_io": false, 00:16:38.884 "nvme_io_md": false, 00:16:38.884 "write_zeroes": true, 00:16:38.884 "zcopy": false, 00:16:38.884 "get_zone_info": false, 00:16:38.884 "zone_management": false, 00:16:38.884 "zone_append": false, 00:16:38.884 "compare": false, 00:16:38.884 "compare_and_write": false, 00:16:38.884 "abort": false, 00:16:38.884 "seek_hole": false, 00:16:38.884 "seek_data": false, 00:16:38.884 "copy": false, 00:16:38.884 "nvme_iov_md": false 00:16:38.884 }, 00:16:38.884 "memory_domains": [ 00:16:38.884 { 00:16:38.884 "dma_device_id": "system", 00:16:38.884 "dma_device_type": 1 00:16:38.884 }, 00:16:38.884 { 00:16:38.884 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:38.884 "dma_device_type": 2 00:16:38.884 }, 00:16:38.884 { 00:16:38.884 "dma_device_id": "system", 00:16:38.884 "dma_device_type": 1 00:16:38.884 }, 00:16:38.884 { 00:16:38.884 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:38.884 "dma_device_type": 2 00:16:38.884 } 00:16:38.884 ], 00:16:38.884 "driver_specific": { 00:16:38.884 "raid": { 00:16:38.884 "uuid": "54df5d2f-eba1-4bae-98b4-6e6d654bed31", 00:16:38.884 "strip_size_kb": 0, 00:16:38.884 "state": "online", 00:16:38.885 "raid_level": "raid1", 00:16:38.885 "superblock": true, 00:16:38.885 "num_base_bdevs": 2, 00:16:38.885 "num_base_bdevs_discovered": 2, 00:16:38.885 "num_base_bdevs_operational": 2, 00:16:38.885 "base_bdevs_list": [ 00:16:38.885 { 00:16:38.885 "name": "pt1", 00:16:38.885 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:16:38.885 "is_configured": true, 00:16:38.885 "data_offset": 256, 00:16:38.885 "data_size": 7936 00:16:38.885 }, 00:16:38.885 { 00:16:38.885 "name": "pt2", 00:16:38.885 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:38.885 "is_configured": true, 00:16:38.885 "data_offset": 256, 00:16:38.885 "data_size": 7936 00:16:38.885 } 00:16:38.885 ] 00:16:38.885 } 00:16:38.885 } 00:16:38.885 }' 00:16:38.885 04:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:38.885 04:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:38.885 pt2' 00:16:38.885 04:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:38.885 04:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:16:38.885 04:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:38.885 04:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:38.885 04:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:38.885 04:14:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.885 04:14:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:38.885 04:14:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.885 04:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:16:38.885 04:14:38 bdev_raid.raid_superblock_test_md_separate 
-- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:16:38.885 04:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:38.885 04:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:38.885 04:14:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.885 04:14:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:38.885 04:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:38.885 04:14:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.885 04:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:16:38.885 04:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:16:38.885 04:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:38.885 04:14:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.885 04:14:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:38.885 04:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:38.885 [2024-11-21 04:14:38.850628] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:39.145 04:14:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.146 04:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=54df5d2f-eba1-4bae-98b4-6e6d654bed31 00:16:39.146 
04:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 54df5d2f-eba1-4bae-98b4-6e6d654bed31 ']' 00:16:39.146 04:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:39.146 04:14:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.146 04:14:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:39.146 [2024-11-21 04:14:38.898329] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:39.146 [2024-11-21 04:14:38.898351] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:39.146 [2024-11-21 04:14:38.898439] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:39.146 [2024-11-21 04:14:38.898497] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:39.146 [2024-11-21 04:14:38.898506] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:16:39.146 04:14:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.146 04:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.146 04:14:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.146 04:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:39.146 04:14:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:39.146 04:14:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.146 04:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:39.146 04:14:38 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:39.146 04:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:39.146 04:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:39.146 04:14:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.146 04:14:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:39.146 04:14:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.146 04:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:39.146 04:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:39.146 04:14:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.146 04:14:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:39.146 04:14:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.146 04:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:39.146 04:14:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.146 04:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:39.146 04:14:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:39.146 04:14:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.146 04:14:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false 
== true ']' 00:16:39.146 04:14:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:39.146 04:14:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:16:39.146 04:14:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:39.146 04:14:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:39.146 04:14:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:39.146 04:14:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:39.146 04:14:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:39.146 04:14:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:39.146 04:14:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.146 04:14:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:39.146 [2024-11-21 04:14:39.046070] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:39.146 [2024-11-21 04:14:39.048213] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:39.146 [2024-11-21 04:14:39.048331] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:39.146 [2024-11-21 04:14:39.048432] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 
00:16:39.146 [2024-11-21 04:14:39.048495] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:39.146 [2024-11-21 04:14:39.048541] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:16:39.146 request: 00:16:39.146 { 00:16:39.146 "name": "raid_bdev1", 00:16:39.146 "raid_level": "raid1", 00:16:39.146 "base_bdevs": [ 00:16:39.146 "malloc1", 00:16:39.146 "malloc2" 00:16:39.146 ], 00:16:39.146 "superblock": false, 00:16:39.146 "method": "bdev_raid_create", 00:16:39.146 "req_id": 1 00:16:39.146 } 00:16:39.146 Got JSON-RPC error response 00:16:39.146 response: 00:16:39.146 { 00:16:39.146 "code": -17, 00:16:39.146 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:39.146 } 00:16:39.146 04:14:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:39.146 04:14:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1 00:16:39.146 04:14:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:39.146 04:14:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:39.146 04:14:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:39.146 04:14:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.146 04:14:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:39.146 04:14:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.146 04:14:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:39.146 04:14:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.146 04:14:39 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:39.146 04:14:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:39.146 04:14:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:39.146 04:14:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.146 04:14:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:39.146 [2024-11-21 04:14:39.109922] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:39.146 [2024-11-21 04:14:39.110026] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:39.146 [2024-11-21 04:14:39.110061] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:39.146 [2024-11-21 04:14:39.110087] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:39.146 [2024-11-21 04:14:39.112199] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:39.146 [2024-11-21 04:14:39.112301] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:39.146 [2024-11-21 04:14:39.112363] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:39.146 [2024-11-21 04:14:39.112421] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:39.146 pt1 00:16:39.146 04:14:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.146 04:14:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:16:39.146 04:14:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:39.146 
04:14:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:39.146 04:14:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:39.146 04:14:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:39.409 04:14:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:39.409 04:14:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:39.409 04:14:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:39.409 04:14:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:39.409 04:14:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:39.409 04:14:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.409 04:14:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.409 04:14:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.409 04:14:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:39.409 04:14:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.409 04:14:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:39.409 "name": "raid_bdev1", 00:16:39.409 "uuid": "54df5d2f-eba1-4bae-98b4-6e6d654bed31", 00:16:39.409 "strip_size_kb": 0, 00:16:39.409 "state": "configuring", 00:16:39.409 "raid_level": "raid1", 00:16:39.409 "superblock": true, 00:16:39.409 "num_base_bdevs": 2, 00:16:39.409 "num_base_bdevs_discovered": 1, 00:16:39.409 
"num_base_bdevs_operational": 2, 00:16:39.409 "base_bdevs_list": [ 00:16:39.409 { 00:16:39.409 "name": "pt1", 00:16:39.409 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:39.409 "is_configured": true, 00:16:39.409 "data_offset": 256, 00:16:39.409 "data_size": 7936 00:16:39.409 }, 00:16:39.409 { 00:16:39.409 "name": null, 00:16:39.409 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:39.409 "is_configured": false, 00:16:39.409 "data_offset": 256, 00:16:39.409 "data_size": 7936 00:16:39.409 } 00:16:39.409 ] 00:16:39.409 }' 00:16:39.409 04:14:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:39.409 04:14:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:39.669 04:14:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:16:39.669 04:14:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:39.669 04:14:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:39.669 04:14:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:39.669 04:14:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.669 04:14:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:39.669 [2024-11-21 04:14:39.573107] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:39.669 [2024-11-21 04:14:39.573202] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:39.669 [2024-11-21 04:14:39.573221] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:16:39.669 [2024-11-21 04:14:39.573229] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:39.669 
[2024-11-21 04:14:39.573379] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:39.669 [2024-11-21 04:14:39.573393] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:39.669 [2024-11-21 04:14:39.573428] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:39.669 [2024-11-21 04:14:39.573444] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:39.669 [2024-11-21 04:14:39.573513] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:16:39.669 [2024-11-21 04:14:39.573521] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:39.669 [2024-11-21 04:14:39.573595] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:16:39.669 [2024-11-21 04:14:39.573677] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:16:39.669 [2024-11-21 04:14:39.573692] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:16:39.669 [2024-11-21 04:14:39.573744] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:39.669 pt2 00:16:39.669 04:14:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.669 04:14:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:39.669 04:14:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:39.669 04:14:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:39.669 04:14:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:39.669 04:14:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:16:39.669 04:14:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:39.669 04:14:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:39.669 04:14:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:39.669 04:14:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:39.670 04:14:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:39.670 04:14:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:39.670 04:14:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:39.670 04:14:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.670 04:14:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.670 04:14:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.670 04:14:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:39.670 04:14:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.670 04:14:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:39.670 "name": "raid_bdev1", 00:16:39.670 "uuid": "54df5d2f-eba1-4bae-98b4-6e6d654bed31", 00:16:39.670 "strip_size_kb": 0, 00:16:39.670 "state": "online", 00:16:39.670 "raid_level": "raid1", 00:16:39.670 "superblock": true, 00:16:39.670 "num_base_bdevs": 2, 00:16:39.670 "num_base_bdevs_discovered": 2, 00:16:39.670 "num_base_bdevs_operational": 2, 00:16:39.670 "base_bdevs_list": [ 00:16:39.670 { 00:16:39.670 "name": 
"pt1", 00:16:39.670 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:39.670 "is_configured": true, 00:16:39.670 "data_offset": 256, 00:16:39.670 "data_size": 7936 00:16:39.670 }, 00:16:39.670 { 00:16:39.670 "name": "pt2", 00:16:39.670 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:39.670 "is_configured": true, 00:16:39.670 "data_offset": 256, 00:16:39.670 "data_size": 7936 00:16:39.670 } 00:16:39.670 ] 00:16:39.670 }' 00:16:39.670 04:14:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:39.670 04:14:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:40.239 04:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:40.239 04:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:40.239 04:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:40.239 04:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:40.239 04:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:16:40.239 04:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:40.239 04:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:40.239 04:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.239 04:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:40.239 04:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:40.239 [2024-11-21 04:14:40.020559] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:40.239 04:14:40 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.239 04:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:40.239 "name": "raid_bdev1", 00:16:40.239 "aliases": [ 00:16:40.239 "54df5d2f-eba1-4bae-98b4-6e6d654bed31" 00:16:40.239 ], 00:16:40.239 "product_name": "Raid Volume", 00:16:40.239 "block_size": 4096, 00:16:40.239 "num_blocks": 7936, 00:16:40.239 "uuid": "54df5d2f-eba1-4bae-98b4-6e6d654bed31", 00:16:40.239 "md_size": 32, 00:16:40.239 "md_interleave": false, 00:16:40.239 "dif_type": 0, 00:16:40.239 "assigned_rate_limits": { 00:16:40.239 "rw_ios_per_sec": 0, 00:16:40.239 "rw_mbytes_per_sec": 0, 00:16:40.239 "r_mbytes_per_sec": 0, 00:16:40.239 "w_mbytes_per_sec": 0 00:16:40.239 }, 00:16:40.239 "claimed": false, 00:16:40.239 "zoned": false, 00:16:40.239 "supported_io_types": { 00:16:40.239 "read": true, 00:16:40.239 "write": true, 00:16:40.239 "unmap": false, 00:16:40.239 "flush": false, 00:16:40.239 "reset": true, 00:16:40.239 "nvme_admin": false, 00:16:40.239 "nvme_io": false, 00:16:40.239 "nvme_io_md": false, 00:16:40.239 "write_zeroes": true, 00:16:40.239 "zcopy": false, 00:16:40.239 "get_zone_info": false, 00:16:40.239 "zone_management": false, 00:16:40.239 "zone_append": false, 00:16:40.239 "compare": false, 00:16:40.239 "compare_and_write": false, 00:16:40.239 "abort": false, 00:16:40.239 "seek_hole": false, 00:16:40.239 "seek_data": false, 00:16:40.239 "copy": false, 00:16:40.239 "nvme_iov_md": false 00:16:40.239 }, 00:16:40.239 "memory_domains": [ 00:16:40.239 { 00:16:40.239 "dma_device_id": "system", 00:16:40.239 "dma_device_type": 1 00:16:40.239 }, 00:16:40.239 { 00:16:40.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:40.239 "dma_device_type": 2 00:16:40.239 }, 00:16:40.239 { 00:16:40.239 "dma_device_id": "system", 00:16:40.239 "dma_device_type": 1 00:16:40.239 }, 00:16:40.239 { 00:16:40.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:40.239 
"dma_device_type": 2 00:16:40.239 } 00:16:40.239 ], 00:16:40.239 "driver_specific": { 00:16:40.239 "raid": { 00:16:40.239 "uuid": "54df5d2f-eba1-4bae-98b4-6e6d654bed31", 00:16:40.239 "strip_size_kb": 0, 00:16:40.239 "state": "online", 00:16:40.240 "raid_level": "raid1", 00:16:40.240 "superblock": true, 00:16:40.240 "num_base_bdevs": 2, 00:16:40.240 "num_base_bdevs_discovered": 2, 00:16:40.240 "num_base_bdevs_operational": 2, 00:16:40.240 "base_bdevs_list": [ 00:16:40.240 { 00:16:40.240 "name": "pt1", 00:16:40.240 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:40.240 "is_configured": true, 00:16:40.240 "data_offset": 256, 00:16:40.240 "data_size": 7936 00:16:40.240 }, 00:16:40.240 { 00:16:40.240 "name": "pt2", 00:16:40.240 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:40.240 "is_configured": true, 00:16:40.240 "data_offset": 256, 00:16:40.240 "data_size": 7936 00:16:40.240 } 00:16:40.240 ] 00:16:40.240 } 00:16:40.240 } 00:16:40.240 }' 00:16:40.240 04:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:40.240 04:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:40.240 pt2' 00:16:40.240 04:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:40.240 04:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:16:40.240 04:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:40.240 04:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:40.240 04:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 
00:16:40.240 04:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.240 04:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:40.240 04:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.240 04:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:16:40.240 04:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:16:40.240 04:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:40.240 04:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:40.240 04:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:40.240 04:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.240 04:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:40.500 04:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.500 04:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:16:40.500 04:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:16:40.500 04:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:40.500 04:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:40.500 04:14:40 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.500 04:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:40.500 [2024-11-21 04:14:40.236230] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:40.500 04:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.500 04:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 54df5d2f-eba1-4bae-98b4-6e6d654bed31 '!=' 54df5d2f-eba1-4bae-98b4-6e6d654bed31 ']' 00:16:40.500 04:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:16:40.500 04:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:40.500 04:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:16:40.500 04:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:40.500 04:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.500 04:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:40.500 [2024-11-21 04:14:40.279953] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:40.500 04:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.500 04:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:40.500 04:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:40.500 04:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:40.500 04:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 
00:16:40.500 04:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:40.500 04:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:40.500 04:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:40.500 04:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:40.500 04:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:40.500 04:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:40.500 04:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.500 04:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.500 04:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.500 04:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:40.500 04:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.500 04:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:40.500 "name": "raid_bdev1", 00:16:40.500 "uuid": "54df5d2f-eba1-4bae-98b4-6e6d654bed31", 00:16:40.500 "strip_size_kb": 0, 00:16:40.500 "state": "online", 00:16:40.500 "raid_level": "raid1", 00:16:40.500 "superblock": true, 00:16:40.500 "num_base_bdevs": 2, 00:16:40.500 "num_base_bdevs_discovered": 1, 00:16:40.500 "num_base_bdevs_operational": 1, 00:16:40.500 "base_bdevs_list": [ 00:16:40.500 { 00:16:40.500 "name": null, 00:16:40.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.500 "is_configured": false, 00:16:40.500 "data_offset": 0, 
00:16:40.500 "data_size": 7936 00:16:40.500 }, 00:16:40.500 { 00:16:40.500 "name": "pt2", 00:16:40.500 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:40.500 "is_configured": true, 00:16:40.500 "data_offset": 256, 00:16:40.500 "data_size": 7936 00:16:40.500 } 00:16:40.500 ] 00:16:40.500 }' 00:16:40.500 04:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:40.500 04:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:41.069 04:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:41.070 04:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.070 04:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:41.070 [2024-11-21 04:14:40.739146] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:41.070 [2024-11-21 04:14:40.739174] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:41.070 [2024-11-21 04:14:40.739227] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:41.070 [2024-11-21 04:14:40.739262] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:41.070 [2024-11-21 04:14:40.739269] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:16:41.070 04:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.070 04:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.070 04:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.070 04:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 
-- # jq -r '.[]' 00:16:41.070 04:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:41.070 04:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.070 04:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:41.070 04:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:41.070 04:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:41.070 04:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:41.070 04:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:41.070 04:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.070 04:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:41.070 04:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.070 04:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:41.070 04:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:41.070 04:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:41.070 04:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:41.070 04:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:16:41.070 04:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:41.070 04:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:41.070 04:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:41.070 [2024-11-21 04:14:40.811017] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:41.070 [2024-11-21 04:14:40.811066] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:41.070 [2024-11-21 04:14:40.811084] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:16:41.070 [2024-11-21 04:14:40.811091] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:41.070 [2024-11-21 04:14:40.813212] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:41.070 [2024-11-21 04:14:40.813259] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:41.070 [2024-11-21 04:14:40.813299] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:41.070 [2024-11-21 04:14:40.813329] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:41.070 [2024-11-21 04:14:40.813404] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:16:41.070 [2024-11-21 04:14:40.813419] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:41.070 [2024-11-21 04:14:40.813499] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:16:41.070 [2024-11-21 04:14:40.813602] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:16:41.070 [2024-11-21 04:14:40.813621] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:16:41.070 [2024-11-21 04:14:40.813679] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:41.070 pt2 00:16:41.070 04:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:16:41.070 04:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:41.070 04:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:41.070 04:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:41.070 04:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:41.070 04:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:41.070 04:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:41.070 04:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:41.070 04:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:41.070 04:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:41.070 04:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:41.070 04:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.070 04:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.070 04:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.070 04:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:41.070 04:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.070 04:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:41.070 "name": "raid_bdev1", 00:16:41.070 
"uuid": "54df5d2f-eba1-4bae-98b4-6e6d654bed31", 00:16:41.070 "strip_size_kb": 0, 00:16:41.070 "state": "online", 00:16:41.070 "raid_level": "raid1", 00:16:41.070 "superblock": true, 00:16:41.070 "num_base_bdevs": 2, 00:16:41.070 "num_base_bdevs_discovered": 1, 00:16:41.070 "num_base_bdevs_operational": 1, 00:16:41.070 "base_bdevs_list": [ 00:16:41.070 { 00:16:41.070 "name": null, 00:16:41.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.070 "is_configured": false, 00:16:41.070 "data_offset": 256, 00:16:41.070 "data_size": 7936 00:16:41.070 }, 00:16:41.070 { 00:16:41.070 "name": "pt2", 00:16:41.070 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:41.070 "is_configured": true, 00:16:41.070 "data_offset": 256, 00:16:41.070 "data_size": 7936 00:16:41.070 } 00:16:41.070 ] 00:16:41.070 }' 00:16:41.070 04:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:41.070 04:14:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:41.331 04:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:41.331 04:14:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.331 04:14:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:41.331 [2024-11-21 04:14:41.222289] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:41.331 [2024-11-21 04:14:41.222312] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:41.331 [2024-11-21 04:14:41.222365] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:41.331 [2024-11-21 04:14:41.222396] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:41.331 [2024-11-21 04:14:41.222409] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:16:41.331 04:14:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.331 04:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:41.331 04:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.331 04:14:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.331 04:14:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:41.331 04:14:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.331 04:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:41.331 04:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:16:41.331 04:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:16:41.331 04:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:41.331 04:14:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.331 04:14:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:41.331 [2024-11-21 04:14:41.274237] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:41.331 [2024-11-21 04:14:41.274280] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:41.331 [2024-11-21 04:14:41.274294] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:16:41.331 [2024-11-21 04:14:41.274307] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:41.331 [2024-11-21 
04:14:41.276447] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:41.331 [2024-11-21 04:14:41.276483] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:41.331 [2024-11-21 04:14:41.276520] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:41.331 [2024-11-21 04:14:41.276546] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:41.331 [2024-11-21 04:14:41.276643] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:41.331 [2024-11-21 04:14:41.276664] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:41.331 [2024-11-21 04:14:41.276676] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state configuring 00:16:41.331 [2024-11-21 04:14:41.276752] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:41.331 [2024-11-21 04:14:41.276818] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:16:41.331 [2024-11-21 04:14:41.276828] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:41.331 [2024-11-21 04:14:41.276883] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:16:41.331 [2024-11-21 04:14:41.276960] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:16:41.331 [2024-11-21 04:14:41.276971] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:16:41.331 [2024-11-21 04:14:41.277043] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:41.331 pt1 00:16:41.331 04:14:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.331 04:14:41 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:16:41.331 04:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:41.331 04:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:41.331 04:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:41.331 04:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:41.331 04:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:41.331 04:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:41.331 04:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:41.331 04:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:41.331 04:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:41.331 04:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:41.331 04:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.331 04:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.331 04:14:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.331 04:14:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:41.591 04:14:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.591 04:14:41 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:41.591 "name": "raid_bdev1", 00:16:41.591 "uuid": "54df5d2f-eba1-4bae-98b4-6e6d654bed31", 00:16:41.591 "strip_size_kb": 0, 00:16:41.591 "state": "online", 00:16:41.591 "raid_level": "raid1", 00:16:41.591 "superblock": true, 00:16:41.591 "num_base_bdevs": 2, 00:16:41.591 "num_base_bdevs_discovered": 1, 00:16:41.591 "num_base_bdevs_operational": 1, 00:16:41.591 "base_bdevs_list": [ 00:16:41.591 { 00:16:41.591 "name": null, 00:16:41.591 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.591 "is_configured": false, 00:16:41.591 "data_offset": 256, 00:16:41.591 "data_size": 7936 00:16:41.591 }, 00:16:41.591 { 00:16:41.591 "name": "pt2", 00:16:41.591 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:41.591 "is_configured": true, 00:16:41.591 "data_offset": 256, 00:16:41.591 "data_size": 7936 00:16:41.591 } 00:16:41.591 ] 00:16:41.591 }' 00:16:41.591 04:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:41.591 04:14:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:41.852 04:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:41.852 04:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:41.852 04:14:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.852 04:14:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:41.852 04:14:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.852 04:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:41.852 04:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 
00:16:41.852 04:14:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.852 04:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:41.852 04:14:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:41.852 [2024-11-21 04:14:41.745630] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:41.852 04:14:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.852 04:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 54df5d2f-eba1-4bae-98b4-6e6d654bed31 '!=' 54df5d2f-eba1-4bae-98b4-6e6d654bed31 ']' 00:16:41.852 04:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 97839 00:16:41.852 04:14:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 97839 ']' 00:16:41.852 04:14:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 97839 00:16:41.852 04:14:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 00:16:41.852 04:14:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:41.852 04:14:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97839 00:16:41.852 killing process with pid 97839 00:16:41.852 04:14:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:41.852 04:14:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:41.852 04:14:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97839' 00:16:41.852 04:14:41 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@973 -- # kill 97839 00:16:41.852 [2024-11-21 04:14:41.811555] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:41.852 [2024-11-21 04:14:41.811614] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:41.852 [2024-11-21 04:14:41.811648] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:41.852 [2024-11-21 04:14:41.811655] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:16:41.852 04:14:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # wait 97839 00:16:42.112 [2024-11-21 04:14:41.855660] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:42.373 04:14:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:16:42.373 00:16:42.373 real 0m4.978s 00:16:42.373 user 0m7.943s 00:16:42.373 sys 0m1.129s 00:16:42.373 04:14:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:42.373 04:14:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:42.373 ************************************ 00:16:42.373 END TEST raid_superblock_test_md_separate 00:16:42.373 ************************************ 00:16:42.373 04:14:42 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:16:42.373 04:14:42 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:16:42.373 04:14:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:42.373 04:14:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:42.373 04:14:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:42.373 ************************************ 00:16:42.373 START TEST raid_rebuild_test_sb_md_separate 00:16:42.373 
************************************ 00:16:42.373 04:14:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:16:42.373 04:14:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:42.373 04:14:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:16:42.373 04:14:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:42.373 04:14:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:42.373 04:14:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:42.373 04:14:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:42.373 04:14:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:42.373 04:14:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:42.373 04:14:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:42.373 04:14:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:42.373 04:14:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:42.373 04:14:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:42.373 04:14:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:42.373 04:14:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:42.373 04:14:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:42.373 04:14:42 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:42.373 04:14:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:42.373 04:14:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:42.373 04:14:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:42.373 04:14:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:42.373 04:14:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:42.373 04:14:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:42.373 04:14:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:42.373 04:14:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:42.373 04:14:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=98156 00:16:42.373 04:14:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:42.373 04:14:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 98156 00:16:42.373 04:14:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 98156 ']' 00:16:42.373 04:14:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:42.373 04:14:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:42.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:42.373 04:14:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:42.373 04:14:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:42.373 04:14:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:42.634 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:42.634 Zero copy mechanism will not be used. 00:16:42.634 [2024-11-21 04:14:42.364679] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:16:42.634 [2024-11-21 04:14:42.364817] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98156 ] 00:16:42.634 [2024-11-21 04:14:42.499002] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:42.634 [2024-11-21 04:14:42.536842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:42.894 [2024-11-21 04:14:42.613975] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:42.894 [2024-11-21 04:14:42.614012] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:43.465 04:14:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:43.465 04:14:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:16:43.465 04:14:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:43.465 04:14:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:16:43.465 04:14:43 bdev_raid.raid_rebuild_test_sb_md_separate 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.465 04:14:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:43.465 BaseBdev1_malloc 00:16:43.465 04:14:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.465 04:14:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:43.465 04:14:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.465 04:14:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:43.465 [2024-11-21 04:14:43.221882] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:43.465 [2024-11-21 04:14:43.221947] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:43.465 [2024-11-21 04:14:43.221978] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:16:43.465 [2024-11-21 04:14:43.221990] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:43.465 [2024-11-21 04:14:43.224185] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:43.465 [2024-11-21 04:14:43.224231] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:43.465 BaseBdev1 00:16:43.465 04:14:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.465 04:14:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:43.465 04:14:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:16:43.465 04:14:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.465 04:14:43 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:43.465 BaseBdev2_malloc 00:16:43.465 04:14:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.465 04:14:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:43.465 04:14:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.465 04:14:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:43.465 [2024-11-21 04:14:43.257920] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:43.465 [2024-11-21 04:14:43.257976] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:43.465 [2024-11-21 04:14:43.258001] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:43.465 [2024-11-21 04:14:43.258010] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:43.465 [2024-11-21 04:14:43.260139] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:43.465 [2024-11-21 04:14:43.260171] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:43.465 BaseBdev2 00:16:43.465 04:14:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.465 04:14:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:16:43.465 04:14:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.465 04:14:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:43.465 spare_malloc 00:16:43.465 04:14:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:16:43.465 04:14:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:43.465 04:14:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.465 04:14:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:43.465 spare_delay 00:16:43.465 04:14:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.465 04:14:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:43.465 04:14:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.465 04:14:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:43.465 [2024-11-21 04:14:43.323021] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:43.465 [2024-11-21 04:14:43.323096] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:43.465 [2024-11-21 04:14:43.323131] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:16:43.465 [2024-11-21 04:14:43.323146] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:43.465 [2024-11-21 04:14:43.326163] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:43.465 [2024-11-21 04:14:43.326202] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:43.465 spare 00:16:43.465 04:14:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.465 04:14:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:16:43.465 
04:14:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.465 04:14:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:43.465 [2024-11-21 04:14:43.335049] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:43.465 [2024-11-21 04:14:43.337294] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:43.465 [2024-11-21 04:14:43.337459] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:16:43.465 [2024-11-21 04:14:43.337472] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:43.465 [2024-11-21 04:14:43.337562] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:16:43.465 [2024-11-21 04:14:43.337668] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:16:43.465 [2024-11-21 04:14:43.337708] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:16:43.465 [2024-11-21 04:14:43.337813] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:43.465 04:14:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.465 04:14:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:43.465 04:14:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:43.465 04:14:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:43.465 04:14:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:43.465 04:14:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:43.465 
04:14:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:43.465 04:14:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:43.465 04:14:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:43.465 04:14:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:43.465 04:14:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:43.466 04:14:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.466 04:14:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.466 04:14:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.466 04:14:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:43.466 04:14:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.466 04:14:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:43.466 "name": "raid_bdev1", 00:16:43.466 "uuid": "98543bdd-177d-4580-9f1b-d11f5b93e491", 00:16:43.466 "strip_size_kb": 0, 00:16:43.466 "state": "online", 00:16:43.466 "raid_level": "raid1", 00:16:43.466 "superblock": true, 00:16:43.466 "num_base_bdevs": 2, 00:16:43.466 "num_base_bdevs_discovered": 2, 00:16:43.466 "num_base_bdevs_operational": 2, 00:16:43.466 "base_bdevs_list": [ 00:16:43.466 { 00:16:43.466 "name": "BaseBdev1", 00:16:43.466 "uuid": "f4c513b8-ae66-5de8-a730-9750e23d5497", 00:16:43.466 "is_configured": true, 00:16:43.466 "data_offset": 256, 00:16:43.466 "data_size": 7936 00:16:43.466 }, 00:16:43.466 { 00:16:43.466 "name": "BaseBdev2", 00:16:43.466 "uuid": 
"b6e3e29b-29e9-5a65-87ff-0ee46ec16bc3", 00:16:43.466 "is_configured": true, 00:16:43.466 "data_offset": 256, 00:16:43.466 "data_size": 7936 00:16:43.466 } 00:16:43.466 ] 00:16:43.466 }' 00:16:43.466 04:14:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:43.466 04:14:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:44.035 04:14:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:44.035 04:14:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:44.035 04:14:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.035 04:14:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:44.035 [2024-11-21 04:14:43.770520] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:44.035 04:14:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.035 04:14:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:16:44.035 04:14:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:44.035 04:14:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.035 04:14:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.035 04:14:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:44.035 04:14:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.035 04:14:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:16:44.035 04:14:43 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:44.035 04:14:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:44.035 04:14:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:44.035 04:14:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:44.035 04:14:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:44.035 04:14:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:44.035 04:14:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:44.035 04:14:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:44.035 04:14:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:44.035 04:14:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:16:44.035 04:14:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:44.035 04:14:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:44.035 04:14:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:44.295 [2024-11-21 04:14:44.009888] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:16:44.295 /dev/nbd0 00:16:44.295 04:14:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:44.295 04:14:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:44.295 04:14:44 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:44.295 04:14:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:16:44.295 04:14:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:44.295 04:14:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:44.295 04:14:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:44.295 04:14:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:16:44.295 04:14:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:44.295 04:14:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:44.295 04:14:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:44.295 1+0 records in 00:16:44.295 1+0 records out 00:16:44.295 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000444841 s, 9.2 MB/s 00:16:44.295 04:14:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:44.295 04:14:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:16:44.295 04:14:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:44.295 04:14:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:44.295 04:14:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:16:44.295 04:14:44 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:44.295 04:14:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:44.295 04:14:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:16:44.295 04:14:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:16:44.295 04:14:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:16:44.865 7936+0 records in 00:16:44.865 7936+0 records out 00:16:44.865 32505856 bytes (33 MB, 31 MiB) copied, 0.551512 s, 58.9 MB/s 00:16:44.865 04:14:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:44.865 04:14:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:44.865 04:14:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:44.865 04:14:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:44.865 04:14:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:16:44.865 04:14:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:44.865 04:14:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:45.125 [2024-11-21 04:14:44.850216] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:45.125 04:14:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:45.125 04:14:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:45.125 04:14:44 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:45.125 04:14:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:45.125 04:14:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:45.125 04:14:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:45.125 04:14:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:16:45.125 04:14:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:16:45.125 04:14:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:45.125 04:14:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.125 04:14:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:45.125 [2024-11-21 04:14:44.879886] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:45.125 04:14:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.125 04:14:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:45.125 04:14:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:45.125 04:14:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:45.125 04:14:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:45.125 04:14:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:45.125 04:14:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:45.125 04:14:44 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:45.125 04:14:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:45.125 04:14:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:45.125 04:14:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:45.125 04:14:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.125 04:14:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.125 04:14:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.125 04:14:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:45.125 04:14:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.125 04:14:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:45.125 "name": "raid_bdev1", 00:16:45.125 "uuid": "98543bdd-177d-4580-9f1b-d11f5b93e491", 00:16:45.125 "strip_size_kb": 0, 00:16:45.125 "state": "online", 00:16:45.125 "raid_level": "raid1", 00:16:45.125 "superblock": true, 00:16:45.125 "num_base_bdevs": 2, 00:16:45.125 "num_base_bdevs_discovered": 1, 00:16:45.125 "num_base_bdevs_operational": 1, 00:16:45.125 "base_bdevs_list": [ 00:16:45.125 { 00:16:45.125 "name": null, 00:16:45.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.125 "is_configured": false, 00:16:45.125 "data_offset": 0, 00:16:45.125 "data_size": 7936 00:16:45.125 }, 00:16:45.125 { 00:16:45.125 "name": "BaseBdev2", 00:16:45.125 "uuid": "b6e3e29b-29e9-5a65-87ff-0ee46ec16bc3", 00:16:45.125 "is_configured": true, 00:16:45.125 "data_offset": 256, 00:16:45.125 "data_size": 7936 00:16:45.125 } 
00:16:45.125 ] 00:16:45.125 }' 00:16:45.125 04:14:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:45.125 04:14:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:45.385 04:14:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:45.385 04:14:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.385 04:14:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:45.385 [2024-11-21 04:14:45.307172] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:45.385 [2024-11-21 04:14:45.311559] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00019c960 00:16:45.385 04:14:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.385 04:14:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:45.385 [2024-11-21 04:14:45.313797] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:46.767 04:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:46.767 04:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:46.767 04:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:46.767 04:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:46.767 04:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:46.767 04:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.767 04:14:46 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.767 04:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.767 04:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:46.767 04:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.767 04:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:46.767 "name": "raid_bdev1", 00:16:46.767 "uuid": "98543bdd-177d-4580-9f1b-d11f5b93e491", 00:16:46.767 "strip_size_kb": 0, 00:16:46.767 "state": "online", 00:16:46.767 "raid_level": "raid1", 00:16:46.767 "superblock": true, 00:16:46.767 "num_base_bdevs": 2, 00:16:46.767 "num_base_bdevs_discovered": 2, 00:16:46.767 "num_base_bdevs_operational": 2, 00:16:46.767 "process": { 00:16:46.767 "type": "rebuild", 00:16:46.767 "target": "spare", 00:16:46.767 "progress": { 00:16:46.767 "blocks": 2560, 00:16:46.767 "percent": 32 00:16:46.767 } 00:16:46.767 }, 00:16:46.767 "base_bdevs_list": [ 00:16:46.767 { 00:16:46.767 "name": "spare", 00:16:46.767 "uuid": "f755c28b-2a49-59ea-8c57-805853acf6ac", 00:16:46.767 "is_configured": true, 00:16:46.767 "data_offset": 256, 00:16:46.767 "data_size": 7936 00:16:46.767 }, 00:16:46.767 { 00:16:46.767 "name": "BaseBdev2", 00:16:46.767 "uuid": "b6e3e29b-29e9-5a65-87ff-0ee46ec16bc3", 00:16:46.767 "is_configured": true, 00:16:46.767 "data_offset": 256, 00:16:46.767 "data_size": 7936 00:16:46.767 } 00:16:46.767 ] 00:16:46.767 }' 00:16:46.767 04:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:46.767 04:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:46.767 04:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:16:46.767 04:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:46.767 04:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:46.767 04:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.767 04:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:46.767 [2024-11-21 04:14:46.454286] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:46.767 [2024-11-21 04:14:46.522155] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:46.767 [2024-11-21 04:14:46.522233] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:46.767 [2024-11-21 04:14:46.522255] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:46.767 [2024-11-21 04:14:46.522269] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:46.767 04:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.767 04:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:46.767 04:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:46.767 04:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:46.767 04:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:46.767 04:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:46.767 04:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:16:46.767 04:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:46.767 04:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:46.767 04:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:46.768 04:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:46.768 04:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.768 04:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.768 04:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.768 04:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:46.768 04:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.768 04:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:46.768 "name": "raid_bdev1", 00:16:46.768 "uuid": "98543bdd-177d-4580-9f1b-d11f5b93e491", 00:16:46.768 "strip_size_kb": 0, 00:16:46.768 "state": "online", 00:16:46.768 "raid_level": "raid1", 00:16:46.768 "superblock": true, 00:16:46.768 "num_base_bdevs": 2, 00:16:46.768 "num_base_bdevs_discovered": 1, 00:16:46.768 "num_base_bdevs_operational": 1, 00:16:46.768 "base_bdevs_list": [ 00:16:46.768 { 00:16:46.768 "name": null, 00:16:46.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.768 "is_configured": false, 00:16:46.768 "data_offset": 0, 00:16:46.768 "data_size": 7936 00:16:46.768 }, 00:16:46.768 { 00:16:46.768 "name": "BaseBdev2", 00:16:46.768 "uuid": "b6e3e29b-29e9-5a65-87ff-0ee46ec16bc3", 00:16:46.768 "is_configured": true, 00:16:46.768 "data_offset": 
256, 00:16:46.768 "data_size": 7936 00:16:46.768 } 00:16:46.768 ] 00:16:46.768 }' 00:16:46.768 04:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:46.768 04:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:47.027 04:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:47.027 04:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:47.027 04:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:47.027 04:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:47.027 04:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:47.027 04:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.027 04:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.027 04:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.027 04:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:47.027 04:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.027 04:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:47.027 "name": "raid_bdev1", 00:16:47.027 "uuid": "98543bdd-177d-4580-9f1b-d11f5b93e491", 00:16:47.027 "strip_size_kb": 0, 00:16:47.027 "state": "online", 00:16:47.027 "raid_level": "raid1", 00:16:47.027 "superblock": true, 00:16:47.027 "num_base_bdevs": 2, 00:16:47.028 "num_base_bdevs_discovered": 1, 00:16:47.028 "num_base_bdevs_operational": 1, 
00:16:47.028 "base_bdevs_list": [ 00:16:47.028 { 00:16:47.028 "name": null, 00:16:47.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.028 "is_configured": false, 00:16:47.028 "data_offset": 0, 00:16:47.028 "data_size": 7936 00:16:47.028 }, 00:16:47.028 { 00:16:47.028 "name": "BaseBdev2", 00:16:47.028 "uuid": "b6e3e29b-29e9-5a65-87ff-0ee46ec16bc3", 00:16:47.028 "is_configured": true, 00:16:47.028 "data_offset": 256, 00:16:47.028 "data_size": 7936 00:16:47.028 } 00:16:47.028 ] 00:16:47.028 }' 00:16:47.288 04:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:47.288 04:14:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:47.288 04:14:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:47.288 04:14:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:47.288 04:14:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:47.288 04:14:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.288 04:14:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:47.288 [2024-11-21 04:14:47.098234] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:47.288 [2024-11-21 04:14:47.101580] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00019ca30 00:16:47.288 [2024-11-21 04:14:47.103716] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:47.288 04:14:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.288 04:14:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:48.229 04:14:48 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:48.229 04:14:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:48.229 04:14:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:48.229 04:14:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:48.229 04:14:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:48.229 04:14:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.229 04:14:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.229 04:14:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.229 04:14:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:48.229 04:14:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.229 04:14:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:48.229 "name": "raid_bdev1", 00:16:48.229 "uuid": "98543bdd-177d-4580-9f1b-d11f5b93e491", 00:16:48.229 "strip_size_kb": 0, 00:16:48.229 "state": "online", 00:16:48.229 "raid_level": "raid1", 00:16:48.229 "superblock": true, 00:16:48.229 "num_base_bdevs": 2, 00:16:48.229 "num_base_bdevs_discovered": 2, 00:16:48.229 "num_base_bdevs_operational": 2, 00:16:48.229 "process": { 00:16:48.229 "type": "rebuild", 00:16:48.229 "target": "spare", 00:16:48.229 "progress": { 00:16:48.229 "blocks": 2560, 00:16:48.229 "percent": 32 00:16:48.229 } 00:16:48.229 }, 00:16:48.229 "base_bdevs_list": [ 00:16:48.229 { 00:16:48.229 "name": "spare", 00:16:48.229 "uuid": 
"f755c28b-2a49-59ea-8c57-805853acf6ac", 00:16:48.229 "is_configured": true, 00:16:48.229 "data_offset": 256, 00:16:48.229 "data_size": 7936 00:16:48.229 }, 00:16:48.229 { 00:16:48.229 "name": "BaseBdev2", 00:16:48.229 "uuid": "b6e3e29b-29e9-5a65-87ff-0ee46ec16bc3", 00:16:48.229 "is_configured": true, 00:16:48.229 "data_offset": 256, 00:16:48.229 "data_size": 7936 00:16:48.229 } 00:16:48.229 ] 00:16:48.229 }' 00:16:48.229 04:14:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:48.490 04:14:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:48.490 04:14:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:48.490 04:14:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:48.490 04:14:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:48.490 04:14:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:48.490 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:48.490 04:14:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:16:48.490 04:14:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:48.490 04:14:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:16:48.490 04:14:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=604 00:16:48.490 04:14:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:48.490 04:14:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:48.490 
04:14:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:48.490 04:14:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:48.490 04:14:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:48.490 04:14:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:48.490 04:14:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.490 04:14:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.490 04:14:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:48.490 04:14:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.490 04:14:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.490 04:14:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:48.490 "name": "raid_bdev1", 00:16:48.490 "uuid": "98543bdd-177d-4580-9f1b-d11f5b93e491", 00:16:48.490 "strip_size_kb": 0, 00:16:48.490 "state": "online", 00:16:48.490 "raid_level": "raid1", 00:16:48.490 "superblock": true, 00:16:48.490 "num_base_bdevs": 2, 00:16:48.490 "num_base_bdevs_discovered": 2, 00:16:48.490 "num_base_bdevs_operational": 2, 00:16:48.490 "process": { 00:16:48.490 "type": "rebuild", 00:16:48.490 "target": "spare", 00:16:48.490 "progress": { 00:16:48.490 "blocks": 2816, 00:16:48.490 "percent": 35 00:16:48.490 } 00:16:48.490 }, 00:16:48.490 "base_bdevs_list": [ 00:16:48.490 { 00:16:48.490 "name": "spare", 00:16:48.490 "uuid": "f755c28b-2a49-59ea-8c57-805853acf6ac", 00:16:48.490 "is_configured": true, 00:16:48.490 "data_offset": 256, 00:16:48.490 "data_size": 7936 00:16:48.490 
}, 00:16:48.490 { 00:16:48.490 "name": "BaseBdev2", 00:16:48.490 "uuid": "b6e3e29b-29e9-5a65-87ff-0ee46ec16bc3", 00:16:48.490 "is_configured": true, 00:16:48.490 "data_offset": 256, 00:16:48.490 "data_size": 7936 00:16:48.490 } 00:16:48.490 ] 00:16:48.490 }' 00:16:48.490 04:14:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:48.490 04:14:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:48.490 04:14:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:48.490 04:14:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:48.490 04:14:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:49.872 04:14:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:49.872 04:14:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:49.872 04:14:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:49.872 04:14:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:49.872 04:14:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:49.872 04:14:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:49.872 04:14:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.872 04:14:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.872 04:14:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:16:49.872 04:14:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:49.872 04:14:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.872 04:14:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:49.872 "name": "raid_bdev1", 00:16:49.872 "uuid": "98543bdd-177d-4580-9f1b-d11f5b93e491", 00:16:49.872 "strip_size_kb": 0, 00:16:49.872 "state": "online", 00:16:49.872 "raid_level": "raid1", 00:16:49.872 "superblock": true, 00:16:49.872 "num_base_bdevs": 2, 00:16:49.872 "num_base_bdevs_discovered": 2, 00:16:49.872 "num_base_bdevs_operational": 2, 00:16:49.872 "process": { 00:16:49.872 "type": "rebuild", 00:16:49.872 "target": "spare", 00:16:49.872 "progress": { 00:16:49.872 "blocks": 5888, 00:16:49.872 "percent": 74 00:16:49.872 } 00:16:49.872 }, 00:16:49.872 "base_bdevs_list": [ 00:16:49.872 { 00:16:49.872 "name": "spare", 00:16:49.872 "uuid": "f755c28b-2a49-59ea-8c57-805853acf6ac", 00:16:49.872 "is_configured": true, 00:16:49.872 "data_offset": 256, 00:16:49.872 "data_size": 7936 00:16:49.872 }, 00:16:49.872 { 00:16:49.873 "name": "BaseBdev2", 00:16:49.873 "uuid": "b6e3e29b-29e9-5a65-87ff-0ee46ec16bc3", 00:16:49.873 "is_configured": true, 00:16:49.873 "data_offset": 256, 00:16:49.873 "data_size": 7936 00:16:49.873 } 00:16:49.873 ] 00:16:49.873 }' 00:16:49.873 04:14:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:49.873 04:14:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:49.873 04:14:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:49.873 04:14:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:49.873 04:14:49 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:16:50.442 [2024-11-21 04:14:50.223130] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:50.442 [2024-11-21 04:14:50.223211] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:50.442 [2024-11-21 04:14:50.223333] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:50.702 04:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:50.702 04:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:50.702 04:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:50.702 04:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:50.702 04:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:50.702 04:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:50.702 04:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.702 04:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.702 04:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.702 04:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:50.702 04:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.702 04:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:50.702 "name": "raid_bdev1", 00:16:50.702 "uuid": "98543bdd-177d-4580-9f1b-d11f5b93e491", 00:16:50.702 
"strip_size_kb": 0, 00:16:50.702 "state": "online", 00:16:50.702 "raid_level": "raid1", 00:16:50.702 "superblock": true, 00:16:50.702 "num_base_bdevs": 2, 00:16:50.702 "num_base_bdevs_discovered": 2, 00:16:50.702 "num_base_bdevs_operational": 2, 00:16:50.702 "base_bdevs_list": [ 00:16:50.702 { 00:16:50.702 "name": "spare", 00:16:50.702 "uuid": "f755c28b-2a49-59ea-8c57-805853acf6ac", 00:16:50.702 "is_configured": true, 00:16:50.702 "data_offset": 256, 00:16:50.702 "data_size": 7936 00:16:50.702 }, 00:16:50.702 { 00:16:50.702 "name": "BaseBdev2", 00:16:50.702 "uuid": "b6e3e29b-29e9-5a65-87ff-0ee46ec16bc3", 00:16:50.702 "is_configured": true, 00:16:50.702 "data_offset": 256, 00:16:50.702 "data_size": 7936 00:16:50.702 } 00:16:50.702 ] 00:16:50.702 }' 00:16:50.702 04:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:50.702 04:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:50.702 04:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:50.966 04:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:50.966 04:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:16:50.966 04:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:50.966 04:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:50.966 04:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:50.966 04:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:50.966 04:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:50.966 04:14:50 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.967 04:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.967 04:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.967 04:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:50.967 04:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.967 04:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:50.967 "name": "raid_bdev1", 00:16:50.967 "uuid": "98543bdd-177d-4580-9f1b-d11f5b93e491", 00:16:50.967 "strip_size_kb": 0, 00:16:50.967 "state": "online", 00:16:50.967 "raid_level": "raid1", 00:16:50.967 "superblock": true, 00:16:50.967 "num_base_bdevs": 2, 00:16:50.967 "num_base_bdevs_discovered": 2, 00:16:50.967 "num_base_bdevs_operational": 2, 00:16:50.967 "base_bdevs_list": [ 00:16:50.967 { 00:16:50.967 "name": "spare", 00:16:50.967 "uuid": "f755c28b-2a49-59ea-8c57-805853acf6ac", 00:16:50.967 "is_configured": true, 00:16:50.967 "data_offset": 256, 00:16:50.967 "data_size": 7936 00:16:50.967 }, 00:16:50.967 { 00:16:50.967 "name": "BaseBdev2", 00:16:50.967 "uuid": "b6e3e29b-29e9-5a65-87ff-0ee46ec16bc3", 00:16:50.967 "is_configured": true, 00:16:50.967 "data_offset": 256, 00:16:50.967 "data_size": 7936 00:16:50.967 } 00:16:50.967 ] 00:16:50.967 }' 00:16:50.967 04:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:50.967 04:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:50.967 04:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:50.967 04:14:50 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:50.967 04:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:50.967 04:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:50.967 04:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:50.967 04:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:50.967 04:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:50.967 04:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:50.967 04:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:50.967 04:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:50.967 04:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:50.967 04:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:50.967 04:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.967 04:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.967 04:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:50.967 04:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.967 04:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.967 04:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:50.967 "name": "raid_bdev1", 00:16:50.967 "uuid": "98543bdd-177d-4580-9f1b-d11f5b93e491", 00:16:50.967 "strip_size_kb": 0, 00:16:50.967 "state": "online", 00:16:50.967 "raid_level": "raid1", 00:16:50.967 "superblock": true, 00:16:50.967 "num_base_bdevs": 2, 00:16:50.967 "num_base_bdevs_discovered": 2, 00:16:50.967 "num_base_bdevs_operational": 2, 00:16:50.967 "base_bdevs_list": [ 00:16:50.967 { 00:16:50.967 "name": "spare", 00:16:50.967 "uuid": "f755c28b-2a49-59ea-8c57-805853acf6ac", 00:16:50.967 "is_configured": true, 00:16:50.967 "data_offset": 256, 00:16:50.967 "data_size": 7936 00:16:50.967 }, 00:16:50.967 { 00:16:50.967 "name": "BaseBdev2", 00:16:50.967 "uuid": "b6e3e29b-29e9-5a65-87ff-0ee46ec16bc3", 00:16:50.967 "is_configured": true, 00:16:50.967 "data_offset": 256, 00:16:50.967 "data_size": 7936 00:16:50.967 } 00:16:50.967 ] 00:16:50.967 }' 00:16:50.967 04:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:50.967 04:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:51.537 04:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:51.537 04:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.537 04:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:51.538 [2024-11-21 04:14:51.257773] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:51.538 [2024-11-21 04:14:51.257802] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:51.538 [2024-11-21 04:14:51.257890] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:51.538 [2024-11-21 04:14:51.257980] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all 
in destruct 00:16:51.538 [2024-11-21 04:14:51.257995] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:16:51.538 04:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.538 04:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.538 04:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.538 04:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:51.538 04:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:16:51.538 04:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.538 04:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:51.538 04:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:51.538 04:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:51.538 04:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:51.538 04:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:51.538 04:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:51.538 04:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:51.538 04:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:51.538 04:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:51.538 04:14:51 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:16:51.538 04:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:51.538 04:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:51.538 04:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:51.538 /dev/nbd0 00:16:51.798 04:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:51.798 04:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:51.798 04:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:51.798 04:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:16:51.798 04:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:51.798 04:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:51.798 04:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:51.798 04:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:16:51.798 04:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:51.798 04:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:51.798 04:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:51.798 1+0 records in 00:16:51.798 1+0 records out 00:16:51.798 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000411165 
s, 10.0 MB/s 00:16:51.798 04:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:51.798 04:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:16:51.798 04:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:51.798 04:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:51.798 04:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:16:51.798 04:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:51.798 04:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:51.798 04:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:51.798 /dev/nbd1 00:16:52.058 04:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:52.059 04:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:52.059 04:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:52.059 04:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:16:52.059 04:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:52.059 04:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:52.059 04:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:52.059 04:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@877 -- # break 00:16:52.059 04:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:52.059 04:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:52.059 04:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:52.059 1+0 records in 00:16:52.059 1+0 records out 00:16:52.059 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000316913 s, 12.9 MB/s 00:16:52.059 04:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:52.059 04:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:16:52.059 04:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:52.059 04:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:52.059 04:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:16:52.059 04:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:52.059 04:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:52.059 04:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:52.059 04:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:52.059 04:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:52.059 04:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:52.059 04:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:52.059 04:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:16:52.059 04:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:52.059 04:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:52.319 04:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:52.319 04:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:52.319 04:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:52.319 04:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:52.319 04:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:52.319 04:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:52.319 04:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:16:52.319 04:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:16:52.319 04:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:52.319 04:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:52.579 04:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:52.579 04:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:52.579 
04:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:52.579 04:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:52.579 04:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:52.580 04:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:52.580 04:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:16:52.580 04:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:16:52.580 04:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:52.580 04:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:52.580 04:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.580 04:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:52.580 04:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.580 04:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:52.580 04:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.580 04:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:52.580 [2024-11-21 04:14:52.325581] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:52.580 [2024-11-21 04:14:52.325647] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:52.580 [2024-11-21 04:14:52.325671] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 
00:16:52.580 [2024-11-21 04:14:52.325684] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:52.580 [2024-11-21 04:14:52.328059] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:52.580 [2024-11-21 04:14:52.328110] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:52.580 [2024-11-21 04:14:52.328173] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:52.580 [2024-11-21 04:14:52.328234] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:52.580 [2024-11-21 04:14:52.328403] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:52.580 spare 00:16:52.580 04:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.580 04:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:52.580 04:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.580 04:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:52.580 [2024-11-21 04:14:52.428299] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:16:52.580 [2024-11-21 04:14:52.428364] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:52.580 [2024-11-21 04:14:52.428479] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001bb1b0 00:16:52.580 [2024-11-21 04:14:52.428612] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:16:52.580 [2024-11-21 04:14:52.428624] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:16:52.580 [2024-11-21 04:14:52.428716] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:16:52.580 04:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.580 04:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:52.580 04:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:52.580 04:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:52.580 04:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:52.580 04:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:52.580 04:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:52.580 04:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:52.580 04:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:52.580 04:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:52.580 04:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:52.580 04:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.580 04:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.580 04:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.580 04:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:52.580 04:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.580 04:14:52 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:52.580 "name": "raid_bdev1", 00:16:52.580 "uuid": "98543bdd-177d-4580-9f1b-d11f5b93e491", 00:16:52.580 "strip_size_kb": 0, 00:16:52.580 "state": "online", 00:16:52.580 "raid_level": "raid1", 00:16:52.580 "superblock": true, 00:16:52.580 "num_base_bdevs": 2, 00:16:52.580 "num_base_bdevs_discovered": 2, 00:16:52.580 "num_base_bdevs_operational": 2, 00:16:52.580 "base_bdevs_list": [ 00:16:52.580 { 00:16:52.580 "name": "spare", 00:16:52.580 "uuid": "f755c28b-2a49-59ea-8c57-805853acf6ac", 00:16:52.580 "is_configured": true, 00:16:52.580 "data_offset": 256, 00:16:52.580 "data_size": 7936 00:16:52.580 }, 00:16:52.580 { 00:16:52.580 "name": "BaseBdev2", 00:16:52.580 "uuid": "b6e3e29b-29e9-5a65-87ff-0ee46ec16bc3", 00:16:52.580 "is_configured": true, 00:16:52.580 "data_offset": 256, 00:16:52.580 "data_size": 7936 00:16:52.580 } 00:16:52.580 ] 00:16:52.580 }' 00:16:52.580 04:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:52.580 04:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:53.151 04:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:53.151 04:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:53.151 04:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:53.151 04:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:53.151 04:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:53.151 04:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.151 04:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:53.151 04:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.151 04:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:53.151 04:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.151 04:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:53.151 "name": "raid_bdev1", 00:16:53.151 "uuid": "98543bdd-177d-4580-9f1b-d11f5b93e491", 00:16:53.151 "strip_size_kb": 0, 00:16:53.151 "state": "online", 00:16:53.151 "raid_level": "raid1", 00:16:53.151 "superblock": true, 00:16:53.151 "num_base_bdevs": 2, 00:16:53.151 "num_base_bdevs_discovered": 2, 00:16:53.151 "num_base_bdevs_operational": 2, 00:16:53.151 "base_bdevs_list": [ 00:16:53.151 { 00:16:53.151 "name": "spare", 00:16:53.151 "uuid": "f755c28b-2a49-59ea-8c57-805853acf6ac", 00:16:53.151 "is_configured": true, 00:16:53.151 "data_offset": 256, 00:16:53.151 "data_size": 7936 00:16:53.151 }, 00:16:53.151 { 00:16:53.151 "name": "BaseBdev2", 00:16:53.151 "uuid": "b6e3e29b-29e9-5a65-87ff-0ee46ec16bc3", 00:16:53.151 "is_configured": true, 00:16:53.151 "data_offset": 256, 00:16:53.151 "data_size": 7936 00:16:53.151 } 00:16:53.151 ] 00:16:53.151 }' 00:16:53.151 04:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:53.151 04:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:53.151 04:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:53.151 04:14:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:53.151 04:14:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:16:53.151 04:14:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.151 04:14:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:53.151 04:14:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:53.151 04:14:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.151 04:14:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:53.151 04:14:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:53.151 04:14:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.151 04:14:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:53.151 [2024-11-21 04:14:53.080366] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:53.151 04:14:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.151 04:14:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:53.151 04:14:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:53.151 04:14:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:53.152 04:14:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:53.152 04:14:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:53.152 04:14:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:53.152 04:14:53 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:53.152 04:14:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:53.152 04:14:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:53.152 04:14:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:53.152 04:14:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.152 04:14:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:53.152 04:14:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.152 04:14:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:53.152 04:14:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.411 04:14:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:53.411 "name": "raid_bdev1", 00:16:53.411 "uuid": "98543bdd-177d-4580-9f1b-d11f5b93e491", 00:16:53.411 "strip_size_kb": 0, 00:16:53.411 "state": "online", 00:16:53.411 "raid_level": "raid1", 00:16:53.411 "superblock": true, 00:16:53.411 "num_base_bdevs": 2, 00:16:53.411 "num_base_bdevs_discovered": 1, 00:16:53.411 "num_base_bdevs_operational": 1, 00:16:53.411 "base_bdevs_list": [ 00:16:53.411 { 00:16:53.411 "name": null, 00:16:53.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.411 "is_configured": false, 00:16:53.411 "data_offset": 0, 00:16:53.411 "data_size": 7936 00:16:53.411 }, 00:16:53.411 { 00:16:53.411 "name": "BaseBdev2", 00:16:53.411 "uuid": "b6e3e29b-29e9-5a65-87ff-0ee46ec16bc3", 00:16:53.411 "is_configured": true, 00:16:53.411 "data_offset": 256, 00:16:53.411 "data_size": 7936 00:16:53.411 } 
00:16:53.411 ] 00:16:53.412 }' 00:16:53.412 04:14:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:53.412 04:14:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:53.671 04:14:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:53.671 04:14:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.672 04:14:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:53.672 [2024-11-21 04:14:53.531761] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:53.672 [2024-11-21 04:14:53.531941] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:53.672 [2024-11-21 04:14:53.531954] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:53.672 [2024-11-21 04:14:53.532000] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:53.672 [2024-11-21 04:14:53.536320] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001bb280 00:16:53.672 04:14:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.672 04:14:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:53.672 [2024-11-21 04:14:53.538493] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:54.611 04:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:54.611 04:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:54.611 04:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:54.611 04:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:54.611 04:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:54.611 04:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.611 04:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.611 04:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.611 04:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:54.611 04:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.872 04:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:54.872 "name": "raid_bdev1", 00:16:54.872 
"uuid": "98543bdd-177d-4580-9f1b-d11f5b93e491", 00:16:54.872 "strip_size_kb": 0, 00:16:54.872 "state": "online", 00:16:54.872 "raid_level": "raid1", 00:16:54.872 "superblock": true, 00:16:54.872 "num_base_bdevs": 2, 00:16:54.872 "num_base_bdevs_discovered": 2, 00:16:54.872 "num_base_bdevs_operational": 2, 00:16:54.872 "process": { 00:16:54.872 "type": "rebuild", 00:16:54.872 "target": "spare", 00:16:54.872 "progress": { 00:16:54.872 "blocks": 2560, 00:16:54.872 "percent": 32 00:16:54.872 } 00:16:54.872 }, 00:16:54.872 "base_bdevs_list": [ 00:16:54.872 { 00:16:54.872 "name": "spare", 00:16:54.872 "uuid": "f755c28b-2a49-59ea-8c57-805853acf6ac", 00:16:54.872 "is_configured": true, 00:16:54.872 "data_offset": 256, 00:16:54.872 "data_size": 7936 00:16:54.872 }, 00:16:54.872 { 00:16:54.872 "name": "BaseBdev2", 00:16:54.872 "uuid": "b6e3e29b-29e9-5a65-87ff-0ee46ec16bc3", 00:16:54.872 "is_configured": true, 00:16:54.872 "data_offset": 256, 00:16:54.872 "data_size": 7936 00:16:54.872 } 00:16:54.872 ] 00:16:54.872 }' 00:16:54.872 04:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:54.872 04:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:54.872 04:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:54.872 04:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:54.872 04:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:54.872 04:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.872 04:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:54.872 [2024-11-21 04:14:54.704042] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:54.872 
[2024-11-21 04:14:54.746224] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:54.872 [2024-11-21 04:14:54.746297] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:54.872 [2024-11-21 04:14:54.746316] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:54.872 [2024-11-21 04:14:54.746323] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:54.872 04:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.872 04:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:54.872 04:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:54.872 04:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:54.872 04:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:54.872 04:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:54.872 04:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:54.872 04:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:54.872 04:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:54.872 04:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:54.872 04:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:54.872 04:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.872 04:14:54 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.872 04:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.872 04:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:54.872 04:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.872 04:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:54.872 "name": "raid_bdev1", 00:16:54.872 "uuid": "98543bdd-177d-4580-9f1b-d11f5b93e491", 00:16:54.872 "strip_size_kb": 0, 00:16:54.872 "state": "online", 00:16:54.872 "raid_level": "raid1", 00:16:54.872 "superblock": true, 00:16:54.872 "num_base_bdevs": 2, 00:16:54.872 "num_base_bdevs_discovered": 1, 00:16:54.872 "num_base_bdevs_operational": 1, 00:16:54.872 "base_bdevs_list": [ 00:16:54.872 { 00:16:54.872 "name": null, 00:16:54.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.872 "is_configured": false, 00:16:54.872 "data_offset": 0, 00:16:54.872 "data_size": 7936 00:16:54.872 }, 00:16:54.872 { 00:16:54.872 "name": "BaseBdev2", 00:16:54.872 "uuid": "b6e3e29b-29e9-5a65-87ff-0ee46ec16bc3", 00:16:54.872 "is_configured": true, 00:16:54.872 "data_offset": 256, 00:16:54.872 "data_size": 7936 00:16:54.872 } 00:16:54.872 ] 00:16:54.872 }' 00:16:54.872 04:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:54.872 04:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:55.441 04:14:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:55.441 04:14:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.441 04:14:55 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:16:55.441 [2024-11-21 04:14:55.175053] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:55.441 [2024-11-21 04:14:55.175169] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:55.441 [2024-11-21 04:14:55.175225] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:16:55.441 [2024-11-21 04:14:55.175265] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:55.441 [2024-11-21 04:14:55.175575] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:55.441 [2024-11-21 04:14:55.175628] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:55.441 [2024-11-21 04:14:55.175729] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:55.441 [2024-11-21 04:14:55.175766] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:55.441 [2024-11-21 04:14:55.175835] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:55.441 [2024-11-21 04:14:55.175909] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:55.441 [2024-11-21 04:14:55.179017] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001bb350 00:16:55.441 spare 00:16:55.441 04:14:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.441 04:14:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:55.441 [2024-11-21 04:14:55.181274] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:56.381 04:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:56.381 04:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:56.381 04:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:56.381 04:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:56.381 04:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:56.381 04:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.381 04:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:56.381 04:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.381 04:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:56.381 04:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.381 04:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:56.381 "name": 
"raid_bdev1", 00:16:56.381 "uuid": "98543bdd-177d-4580-9f1b-d11f5b93e491", 00:16:56.381 "strip_size_kb": 0, 00:16:56.381 "state": "online", 00:16:56.381 "raid_level": "raid1", 00:16:56.381 "superblock": true, 00:16:56.381 "num_base_bdevs": 2, 00:16:56.381 "num_base_bdevs_discovered": 2, 00:16:56.381 "num_base_bdevs_operational": 2, 00:16:56.381 "process": { 00:16:56.381 "type": "rebuild", 00:16:56.381 "target": "spare", 00:16:56.381 "progress": { 00:16:56.381 "blocks": 2560, 00:16:56.381 "percent": 32 00:16:56.381 } 00:16:56.381 }, 00:16:56.381 "base_bdevs_list": [ 00:16:56.381 { 00:16:56.381 "name": "spare", 00:16:56.381 "uuid": "f755c28b-2a49-59ea-8c57-805853acf6ac", 00:16:56.381 "is_configured": true, 00:16:56.381 "data_offset": 256, 00:16:56.381 "data_size": 7936 00:16:56.381 }, 00:16:56.381 { 00:16:56.381 "name": "BaseBdev2", 00:16:56.382 "uuid": "b6e3e29b-29e9-5a65-87ff-0ee46ec16bc3", 00:16:56.382 "is_configured": true, 00:16:56.382 "data_offset": 256, 00:16:56.382 "data_size": 7936 00:16:56.382 } 00:16:56.382 ] 00:16:56.382 }' 00:16:56.382 04:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:56.382 04:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:56.382 04:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:56.382 04:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:56.382 04:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:56.382 04:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.382 04:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:56.382 [2024-11-21 04:14:56.331501] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:16:56.642 [2024-11-21 04:14:56.388954] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:56.642 [2024-11-21 04:14:56.389064] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:56.642 [2024-11-21 04:14:56.389081] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:56.642 [2024-11-21 04:14:56.389092] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:56.642 04:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.642 04:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:56.642 04:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:56.642 04:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:56.642 04:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:56.642 04:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:56.642 04:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:56.642 04:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:56.642 04:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:56.642 04:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:56.642 04:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:56.642 04:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:16:56.642 04:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:56.642 04:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.642 04:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:56.642 04:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.642 04:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:56.642 "name": "raid_bdev1", 00:16:56.642 "uuid": "98543bdd-177d-4580-9f1b-d11f5b93e491", 00:16:56.642 "strip_size_kb": 0, 00:16:56.642 "state": "online", 00:16:56.642 "raid_level": "raid1", 00:16:56.642 "superblock": true, 00:16:56.642 "num_base_bdevs": 2, 00:16:56.642 "num_base_bdevs_discovered": 1, 00:16:56.642 "num_base_bdevs_operational": 1, 00:16:56.642 "base_bdevs_list": [ 00:16:56.642 { 00:16:56.642 "name": null, 00:16:56.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.642 "is_configured": false, 00:16:56.642 "data_offset": 0, 00:16:56.642 "data_size": 7936 00:16:56.642 }, 00:16:56.642 { 00:16:56.642 "name": "BaseBdev2", 00:16:56.642 "uuid": "b6e3e29b-29e9-5a65-87ff-0ee46ec16bc3", 00:16:56.642 "is_configured": true, 00:16:56.642 "data_offset": 256, 00:16:56.642 "data_size": 7936 00:16:56.642 } 00:16:56.642 ] 00:16:56.642 }' 00:16:56.642 04:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:56.642 04:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:56.902 04:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:56.902 04:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:56.902 04:14:56 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:56.902 04:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:56.902 04:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:56.902 04:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:56.902 04:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.902 04:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.902 04:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:56.902 04:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.902 04:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:56.902 "name": "raid_bdev1", 00:16:56.902 "uuid": "98543bdd-177d-4580-9f1b-d11f5b93e491", 00:16:56.902 "strip_size_kb": 0, 00:16:56.902 "state": "online", 00:16:56.902 "raid_level": "raid1", 00:16:56.902 "superblock": true, 00:16:56.902 "num_base_bdevs": 2, 00:16:56.902 "num_base_bdevs_discovered": 1, 00:16:56.902 "num_base_bdevs_operational": 1, 00:16:56.902 "base_bdevs_list": [ 00:16:56.902 { 00:16:56.902 "name": null, 00:16:56.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.902 "is_configured": false, 00:16:56.902 "data_offset": 0, 00:16:56.902 "data_size": 7936 00:16:56.902 }, 00:16:56.902 { 00:16:56.902 "name": "BaseBdev2", 00:16:56.902 "uuid": "b6e3e29b-29e9-5a65-87ff-0ee46ec16bc3", 00:16:56.902 "is_configured": true, 00:16:56.902 "data_offset": 256, 00:16:56.902 "data_size": 7936 00:16:56.902 } 00:16:56.902 ] 00:16:56.902 }' 00:16:56.902 04:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:57.162 04:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:57.162 04:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:57.162 04:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:57.162 04:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:57.162 04:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.162 04:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:57.162 04:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.162 04:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:57.162 04:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.162 04:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:57.162 [2024-11-21 04:14:56.973076] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:57.162 [2024-11-21 04:14:56.973177] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:57.162 [2024-11-21 04:14:56.973215] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:16:57.162 [2024-11-21 04:14:56.973274] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:57.162 [2024-11-21 04:14:56.973533] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:57.162 [2024-11-21 04:14:56.973589] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev1 00:16:57.162 [2024-11-21 04:14:56.973659] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:57.163 [2024-11-21 04:14:56.973681] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:57.163 [2024-11-21 04:14:56.973692] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:57.163 [2024-11-21 04:14:56.973705] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:57.163 BaseBdev1 00:16:57.163 04:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.163 04:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:58.102 04:14:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:58.102 04:14:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:58.102 04:14:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:58.102 04:14:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:58.103 04:14:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:58.103 04:14:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:58.103 04:14:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:58.103 04:14:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:58.103 04:14:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:16:58.103 04:14:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:58.103 04:14:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.103 04:14:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:58.103 04:14:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.103 04:14:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:58.103 04:14:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.103 04:14:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:58.103 "name": "raid_bdev1", 00:16:58.103 "uuid": "98543bdd-177d-4580-9f1b-d11f5b93e491", 00:16:58.103 "strip_size_kb": 0, 00:16:58.103 "state": "online", 00:16:58.103 "raid_level": "raid1", 00:16:58.103 "superblock": true, 00:16:58.103 "num_base_bdevs": 2, 00:16:58.103 "num_base_bdevs_discovered": 1, 00:16:58.103 "num_base_bdevs_operational": 1, 00:16:58.103 "base_bdevs_list": [ 00:16:58.103 { 00:16:58.103 "name": null, 00:16:58.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:58.103 "is_configured": false, 00:16:58.103 "data_offset": 0, 00:16:58.103 "data_size": 7936 00:16:58.103 }, 00:16:58.103 { 00:16:58.103 "name": "BaseBdev2", 00:16:58.103 "uuid": "b6e3e29b-29e9-5a65-87ff-0ee46ec16bc3", 00:16:58.103 "is_configured": true, 00:16:58.103 "data_offset": 256, 00:16:58.103 "data_size": 7936 00:16:58.103 } 00:16:58.103 ] 00:16:58.103 }' 00:16:58.103 04:14:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:58.103 04:14:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:58.672 04:14:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 
-- # verify_raid_bdev_process raid_bdev1 none none 00:16:58.672 04:14:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:58.672 04:14:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:58.672 04:14:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:58.672 04:14:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:58.672 04:14:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:58.672 04:14:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.672 04:14:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.672 04:14:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:58.672 04:14:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.672 04:14:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:58.672 "name": "raid_bdev1", 00:16:58.672 "uuid": "98543bdd-177d-4580-9f1b-d11f5b93e491", 00:16:58.672 "strip_size_kb": 0, 00:16:58.672 "state": "online", 00:16:58.672 "raid_level": "raid1", 00:16:58.672 "superblock": true, 00:16:58.672 "num_base_bdevs": 2, 00:16:58.672 "num_base_bdevs_discovered": 1, 00:16:58.672 "num_base_bdevs_operational": 1, 00:16:58.672 "base_bdevs_list": [ 00:16:58.672 { 00:16:58.672 "name": null, 00:16:58.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:58.672 "is_configured": false, 00:16:58.672 "data_offset": 0, 00:16:58.672 "data_size": 7936 00:16:58.672 }, 00:16:58.672 { 00:16:58.672 "name": "BaseBdev2", 00:16:58.672 "uuid": "b6e3e29b-29e9-5a65-87ff-0ee46ec16bc3", 00:16:58.672 "is_configured": 
true, 00:16:58.672 "data_offset": 256, 00:16:58.672 "data_size": 7936 00:16:58.672 } 00:16:58.672 ] 00:16:58.672 }' 00:16:58.672 04:14:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:58.672 04:14:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:58.672 04:14:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:58.672 04:14:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:58.672 04:14:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:58.672 04:14:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:16:58.672 04:14:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:58.672 04:14:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:58.672 04:14:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:58.672 04:14:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:58.672 04:14:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:58.672 04:14:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:58.672 04:14:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.672 04:14:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:58.672 [2024-11-21 04:14:58.554441] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:58.672 [2024-11-21 04:14:58.554623] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:58.672 [2024-11-21 04:14:58.554635] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:58.672 request: 00:16:58.672 { 00:16:58.672 "base_bdev": "BaseBdev1", 00:16:58.672 "raid_bdev": "raid_bdev1", 00:16:58.672 "method": "bdev_raid_add_base_bdev", 00:16:58.672 "req_id": 1 00:16:58.672 } 00:16:58.672 Got JSON-RPC error response 00:16:58.672 response: 00:16:58.672 { 00:16:58.672 "code": -22, 00:16:58.672 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:58.672 } 00:16:58.672 04:14:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:58.672 04:14:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 00:16:58.672 04:14:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:58.672 04:14:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:58.672 04:14:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:58.672 04:14:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:59.611 04:14:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:59.611 04:14:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:59.611 04:14:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:59.611 04:14:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:16:59.611 04:14:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:59.611 04:14:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:59.611 04:14:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:59.611 04:14:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:59.611 04:14:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:59.611 04:14:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:59.611 04:14:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.611 04:14:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.611 04:14:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.611 04:14:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:59.871 04:14:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.871 04:14:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:59.871 "name": "raid_bdev1", 00:16:59.871 "uuid": "98543bdd-177d-4580-9f1b-d11f5b93e491", 00:16:59.871 "strip_size_kb": 0, 00:16:59.871 "state": "online", 00:16:59.871 "raid_level": "raid1", 00:16:59.871 "superblock": true, 00:16:59.871 "num_base_bdevs": 2, 00:16:59.871 "num_base_bdevs_discovered": 1, 00:16:59.871 "num_base_bdevs_operational": 1, 00:16:59.871 "base_bdevs_list": [ 00:16:59.871 { 00:16:59.871 "name": null, 00:16:59.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:59.871 "is_configured": false, 00:16:59.871 
"data_offset": 0, 00:16:59.871 "data_size": 7936 00:16:59.871 }, 00:16:59.871 { 00:16:59.871 "name": "BaseBdev2", 00:16:59.871 "uuid": "b6e3e29b-29e9-5a65-87ff-0ee46ec16bc3", 00:16:59.871 "is_configured": true, 00:16:59.871 "data_offset": 256, 00:16:59.871 "data_size": 7936 00:16:59.871 } 00:16:59.871 ] 00:16:59.871 }' 00:16:59.871 04:14:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:59.871 04:14:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:00.132 04:15:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:00.132 04:15:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:00.132 04:15:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:00.132 04:15:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:00.132 04:15:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:00.132 04:15:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:00.132 04:15:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.132 04:15:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.132 04:15:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:00.132 04:15:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.132 04:15:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:00.132 "name": "raid_bdev1", 00:17:00.132 "uuid": "98543bdd-177d-4580-9f1b-d11f5b93e491", 00:17:00.132 
"strip_size_kb": 0, 00:17:00.132 "state": "online", 00:17:00.132 "raid_level": "raid1", 00:17:00.132 "superblock": true, 00:17:00.132 "num_base_bdevs": 2, 00:17:00.132 "num_base_bdevs_discovered": 1, 00:17:00.132 "num_base_bdevs_operational": 1, 00:17:00.132 "base_bdevs_list": [ 00:17:00.132 { 00:17:00.132 "name": null, 00:17:00.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.132 "is_configured": false, 00:17:00.132 "data_offset": 0, 00:17:00.132 "data_size": 7936 00:17:00.132 }, 00:17:00.132 { 00:17:00.132 "name": "BaseBdev2", 00:17:00.132 "uuid": "b6e3e29b-29e9-5a65-87ff-0ee46ec16bc3", 00:17:00.132 "is_configured": true, 00:17:00.132 "data_offset": 256, 00:17:00.132 "data_size": 7936 00:17:00.132 } 00:17:00.132 ] 00:17:00.132 }' 00:17:00.132 04:15:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:00.132 04:15:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:00.132 04:15:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:00.392 04:15:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:00.392 04:15:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 98156 00:17:00.392 04:15:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 98156 ']' 00:17:00.392 04:15:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 98156 00:17:00.392 04:15:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:17:00.392 04:15:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:00.392 04:15:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 98156 00:17:00.392 04:15:00 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:00.393 04:15:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:00.393 killing process with pid 98156 00:17:00.393 Received shutdown signal, test time was about 60.000000 seconds 00:17:00.393 00:17:00.393 Latency(us) 00:17:00.393 [2024-11-21T04:15:00.366Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:00.393 [2024-11-21T04:15:00.366Z] =================================================================================================================== 00:17:00.393 [2024-11-21T04:15:00.366Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:00.393 04:15:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 98156' 00:17:00.393 04:15:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 98156 00:17:00.393 [2024-11-21 04:15:00.177197] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:00.393 [2024-11-21 04:15:00.177352] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:00.393 04:15:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 98156 00:17:00.393 [2024-11-21 04:15:00.177409] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:00.393 [2024-11-21 04:15:00.177418] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:17:00.393 [2024-11-21 04:15:00.237961] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:00.653 04:15:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:17:00.653 00:17:00.653 real 0m18.286s 00:17:00.653 user 0m24.051s 00:17:00.653 sys 0m2.708s 00:17:00.653 
************************************ 00:17:00.653 END TEST raid_rebuild_test_sb_md_separate 00:17:00.653 ************************************ 00:17:00.653 04:15:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:00.653 04:15:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:00.653 04:15:00 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:17:00.653 04:15:00 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:17:00.653 04:15:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:00.653 04:15:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:00.653 04:15:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:00.913 ************************************ 00:17:00.913 START TEST raid_state_function_test_sb_md_interleaved 00:17:00.914 ************************************ 00:17:00.914 04:15:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:17:00.914 04:15:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:17:00.914 04:15:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:17:00.914 04:15:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:00.914 04:15:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:00.914 04:15:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:00.914 04:15:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:00.914 04:15:00 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:00.914 04:15:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:00.914 04:15:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:00.914 04:15:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:00.914 04:15:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:00.914 04:15:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:00.914 04:15:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:00.914 04:15:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:00.914 04:15:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:00.914 04:15:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:00.914 04:15:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:00.914 04:15:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:00.914 04:15:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:17:00.914 04:15:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:17:00.914 04:15:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:00.914 04:15:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:17:00.914 04:15:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=98839 00:17:00.914 04:15:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:00.914 04:15:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 98839' 00:17:00.914 Process raid pid: 98839 00:17:00.914 04:15:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 98839 00:17:00.914 04:15:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 98839 ']' 00:17:00.914 04:15:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:00.914 04:15:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:00.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:00.914 04:15:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:00.914 04:15:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:00.914 04:15:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:00.914 [2024-11-21 04:15:00.731626] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:17:00.914 [2024-11-21 04:15:00.731849] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:01.174 [2024-11-21 04:15:00.887365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:01.174 [2024-11-21 04:15:00.927686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:01.174 [2024-11-21 04:15:01.005284] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:01.174 [2024-11-21 04:15:01.005384] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:01.743 04:15:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:01.743 04:15:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:17:01.743 04:15:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:01.743 04:15:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.743 04:15:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:01.743 [2024-11-21 04:15:01.541436] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:01.743 [2024-11-21 04:15:01.541492] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:01.743 [2024-11-21 04:15:01.541502] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:01.743 [2024-11-21 04:15:01.541512] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:01.743 04:15:01 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.743 04:15:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:01.743 04:15:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:01.743 04:15:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:01.743 04:15:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:01.743 04:15:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:01.743 04:15:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:01.743 04:15:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:01.743 04:15:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:01.743 04:15:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:01.743 04:15:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:01.743 04:15:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.743 04:15:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:01.743 04:15:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.743 04:15:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:01.743 04:15:01 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.743 04:15:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:01.743 "name": "Existed_Raid", 00:17:01.743 "uuid": "81ee4de3-7de9-4618-95aa-7367dd309537", 00:17:01.743 "strip_size_kb": 0, 00:17:01.743 "state": "configuring", 00:17:01.743 "raid_level": "raid1", 00:17:01.743 "superblock": true, 00:17:01.743 "num_base_bdevs": 2, 00:17:01.743 "num_base_bdevs_discovered": 0, 00:17:01.743 "num_base_bdevs_operational": 2, 00:17:01.743 "base_bdevs_list": [ 00:17:01.743 { 00:17:01.743 "name": "BaseBdev1", 00:17:01.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.743 "is_configured": false, 00:17:01.743 "data_offset": 0, 00:17:01.743 "data_size": 0 00:17:01.743 }, 00:17:01.743 { 00:17:01.743 "name": "BaseBdev2", 00:17:01.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.743 "is_configured": false, 00:17:01.743 "data_offset": 0, 00:17:01.743 "data_size": 0 00:17:01.743 } 00:17:01.743 ] 00:17:01.743 }' 00:17:01.743 04:15:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:01.743 04:15:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:02.314 04:15:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:02.314 04:15:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.314 04:15:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:02.314 [2024-11-21 04:15:01.992622] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:02.314 [2024-11-21 04:15:01.992712] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state 
configuring 00:17:02.314 04:15:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.314 04:15:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:02.314 04:15:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.314 04:15:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:02.314 [2024-11-21 04:15:02.004613] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:02.314 [2024-11-21 04:15:02.004692] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:02.314 [2024-11-21 04:15:02.004718] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:02.314 [2024-11-21 04:15:02.004755] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:02.314 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.314 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:17:02.314 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.314 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:02.314 [2024-11-21 04:15:02.032140] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:02.314 BaseBdev1 00:17:02.314 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.314 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:02.314 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:02.314 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:02.314 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:17:02.314 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:02.314 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:02.314 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:02.314 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.314 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:02.314 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.314 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:02.314 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.314 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:02.314 [ 00:17:02.314 { 00:17:02.314 "name": "BaseBdev1", 00:17:02.314 "aliases": [ 00:17:02.314 "1fcfd8d4-6489-4dfd-a9c4-76fa4610d2ef" 00:17:02.314 ], 00:17:02.314 "product_name": "Malloc disk", 00:17:02.314 "block_size": 4128, 00:17:02.314 "num_blocks": 8192, 00:17:02.314 "uuid": "1fcfd8d4-6489-4dfd-a9c4-76fa4610d2ef", 00:17:02.314 "md_size": 32, 00:17:02.314 
"md_interleave": true, 00:17:02.314 "dif_type": 0, 00:17:02.314 "assigned_rate_limits": { 00:17:02.314 "rw_ios_per_sec": 0, 00:17:02.314 "rw_mbytes_per_sec": 0, 00:17:02.314 "r_mbytes_per_sec": 0, 00:17:02.314 "w_mbytes_per_sec": 0 00:17:02.314 }, 00:17:02.314 "claimed": true, 00:17:02.314 "claim_type": "exclusive_write", 00:17:02.314 "zoned": false, 00:17:02.314 "supported_io_types": { 00:17:02.314 "read": true, 00:17:02.314 "write": true, 00:17:02.314 "unmap": true, 00:17:02.314 "flush": true, 00:17:02.314 "reset": true, 00:17:02.314 "nvme_admin": false, 00:17:02.314 "nvme_io": false, 00:17:02.314 "nvme_io_md": false, 00:17:02.314 "write_zeroes": true, 00:17:02.314 "zcopy": true, 00:17:02.314 "get_zone_info": false, 00:17:02.314 "zone_management": false, 00:17:02.314 "zone_append": false, 00:17:02.314 "compare": false, 00:17:02.314 "compare_and_write": false, 00:17:02.314 "abort": true, 00:17:02.314 "seek_hole": false, 00:17:02.314 "seek_data": false, 00:17:02.314 "copy": true, 00:17:02.314 "nvme_iov_md": false 00:17:02.314 }, 00:17:02.314 "memory_domains": [ 00:17:02.314 { 00:17:02.314 "dma_device_id": "system", 00:17:02.314 "dma_device_type": 1 00:17:02.314 }, 00:17:02.314 { 00:17:02.314 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:02.314 "dma_device_type": 2 00:17:02.314 } 00:17:02.314 ], 00:17:02.314 "driver_specific": {} 00:17:02.314 } 00:17:02.314 ] 00:17:02.314 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.314 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:17:02.314 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:02.314 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:02.314 04:15:02 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:02.314 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:02.314 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:02.314 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:02.314 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:02.314 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:02.314 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:02.314 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:02.314 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.314 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:02.314 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.314 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:02.314 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.314 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:02.314 "name": "Existed_Raid", 00:17:02.314 "uuid": "33090e69-504a-4288-a89f-66b831467dd9", 00:17:02.314 "strip_size_kb": 0, 00:17:02.314 "state": "configuring", 00:17:02.314 "raid_level": "raid1", 
00:17:02.314 "superblock": true, 00:17:02.314 "num_base_bdevs": 2, 00:17:02.314 "num_base_bdevs_discovered": 1, 00:17:02.315 "num_base_bdevs_operational": 2, 00:17:02.315 "base_bdevs_list": [ 00:17:02.315 { 00:17:02.315 "name": "BaseBdev1", 00:17:02.315 "uuid": "1fcfd8d4-6489-4dfd-a9c4-76fa4610d2ef", 00:17:02.315 "is_configured": true, 00:17:02.315 "data_offset": 256, 00:17:02.315 "data_size": 7936 00:17:02.315 }, 00:17:02.315 { 00:17:02.315 "name": "BaseBdev2", 00:17:02.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.315 "is_configured": false, 00:17:02.315 "data_offset": 0, 00:17:02.315 "data_size": 0 00:17:02.315 } 00:17:02.315 ] 00:17:02.315 }' 00:17:02.315 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:02.315 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:02.575 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:02.575 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.575 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:02.575 [2024-11-21 04:15:02.491407] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:02.575 [2024-11-21 04:15:02.491452] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:17:02.575 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.575 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:02.575 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:17:02.575 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:02.575 [2024-11-21 04:15:02.503433] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:02.575 [2024-11-21 04:15:02.505608] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:02.575 [2024-11-21 04:15:02.505690] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:02.575 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.575 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:02.575 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:02.575 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:02.575 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:02.575 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:02.575 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:02.575 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:02.575 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:02.575 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:02.575 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:02.575 
04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:02.575 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:02.575 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.575 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:02.575 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.575 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:02.575 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.835 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:02.835 "name": "Existed_Raid", 00:17:02.835 "uuid": "3baa77ed-c4d5-4611-89d3-7e694dcc1512", 00:17:02.835 "strip_size_kb": 0, 00:17:02.835 "state": "configuring", 00:17:02.835 "raid_level": "raid1", 00:17:02.835 "superblock": true, 00:17:02.835 "num_base_bdevs": 2, 00:17:02.835 "num_base_bdevs_discovered": 1, 00:17:02.835 "num_base_bdevs_operational": 2, 00:17:02.835 "base_bdevs_list": [ 00:17:02.835 { 00:17:02.835 "name": "BaseBdev1", 00:17:02.835 "uuid": "1fcfd8d4-6489-4dfd-a9c4-76fa4610d2ef", 00:17:02.835 "is_configured": true, 00:17:02.835 "data_offset": 256, 00:17:02.835 "data_size": 7936 00:17:02.835 }, 00:17:02.835 { 00:17:02.835 "name": "BaseBdev2", 00:17:02.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.835 "is_configured": false, 00:17:02.835 "data_offset": 0, 00:17:02.835 "data_size": 0 00:17:02.835 } 00:17:02.835 ] 00:17:02.835 }' 00:17:02.835 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:17:02.835 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:03.095 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:17:03.095 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.095 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:03.095 [2024-11-21 04:15:02.947933] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:03.095 [2024-11-21 04:15:02.948275] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:17:03.095 [2024-11-21 04:15:02.948344] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:03.095 [2024-11-21 04:15:02.948507] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:17:03.095 [2024-11-21 04:15:02.948645] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:17:03.095 [2024-11-21 04:15:02.948695] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:17:03.095 BaseBdev2 00:17:03.095 [2024-11-21 04:15:02.948818] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:03.095 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.095 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:03.095 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:03.095 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:17:03.095 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:17:03.095 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:03.095 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:03.095 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:03.095 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.095 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:03.095 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.095 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:03.095 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.095 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:03.095 [ 00:17:03.095 { 00:17:03.095 "name": "BaseBdev2", 00:17:03.095 "aliases": [ 00:17:03.095 "293ae53b-606b-48a6-a0e7-43a00188e514" 00:17:03.095 ], 00:17:03.095 "product_name": "Malloc disk", 00:17:03.095 "block_size": 4128, 00:17:03.095 "num_blocks": 8192, 00:17:03.095 "uuid": "293ae53b-606b-48a6-a0e7-43a00188e514", 00:17:03.095 "md_size": 32, 00:17:03.095 "md_interleave": true, 00:17:03.095 "dif_type": 0, 00:17:03.095 "assigned_rate_limits": { 00:17:03.095 "rw_ios_per_sec": 0, 00:17:03.095 "rw_mbytes_per_sec": 0, 00:17:03.095 "r_mbytes_per_sec": 0, 00:17:03.095 "w_mbytes_per_sec": 0 00:17:03.095 }, 00:17:03.095 "claimed": true, 00:17:03.095 "claim_type": "exclusive_write", 
00:17:03.095 "zoned": false, 00:17:03.095 "supported_io_types": { 00:17:03.095 "read": true, 00:17:03.095 "write": true, 00:17:03.095 "unmap": true, 00:17:03.095 "flush": true, 00:17:03.095 "reset": true, 00:17:03.095 "nvme_admin": false, 00:17:03.095 "nvme_io": false, 00:17:03.095 "nvme_io_md": false, 00:17:03.095 "write_zeroes": true, 00:17:03.095 "zcopy": true, 00:17:03.095 "get_zone_info": false, 00:17:03.095 "zone_management": false, 00:17:03.095 "zone_append": false, 00:17:03.095 "compare": false, 00:17:03.095 "compare_and_write": false, 00:17:03.095 "abort": true, 00:17:03.095 "seek_hole": false, 00:17:03.095 "seek_data": false, 00:17:03.095 "copy": true, 00:17:03.095 "nvme_iov_md": false 00:17:03.095 }, 00:17:03.095 "memory_domains": [ 00:17:03.095 { 00:17:03.095 "dma_device_id": "system", 00:17:03.095 "dma_device_type": 1 00:17:03.095 }, 00:17:03.095 { 00:17:03.095 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:03.095 "dma_device_type": 2 00:17:03.095 } 00:17:03.095 ], 00:17:03.095 "driver_specific": {} 00:17:03.095 } 00:17:03.095 ] 00:17:03.095 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.095 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:17:03.095 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:03.095 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:03.095 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:03.095 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:03.095 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:03.095 
04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:03.095 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:03.095 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:03.095 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:03.095 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:03.095 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:03.095 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:03.095 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.095 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:03.095 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.095 04:15:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:03.095 04:15:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.095 04:15:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:03.096 "name": "Existed_Raid", 00:17:03.096 "uuid": "3baa77ed-c4d5-4611-89d3-7e694dcc1512", 00:17:03.096 "strip_size_kb": 0, 00:17:03.096 "state": "online", 00:17:03.096 "raid_level": "raid1", 00:17:03.096 "superblock": true, 00:17:03.096 "num_base_bdevs": 2, 00:17:03.096 "num_base_bdevs_discovered": 2, 00:17:03.096 
"num_base_bdevs_operational": 2, 00:17:03.096 "base_bdevs_list": [ 00:17:03.096 { 00:17:03.096 "name": "BaseBdev1", 00:17:03.096 "uuid": "1fcfd8d4-6489-4dfd-a9c4-76fa4610d2ef", 00:17:03.096 "is_configured": true, 00:17:03.096 "data_offset": 256, 00:17:03.096 "data_size": 7936 00:17:03.096 }, 00:17:03.096 { 00:17:03.096 "name": "BaseBdev2", 00:17:03.096 "uuid": "293ae53b-606b-48a6-a0e7-43a00188e514", 00:17:03.096 "is_configured": true, 00:17:03.096 "data_offset": 256, 00:17:03.096 "data_size": 7936 00:17:03.096 } 00:17:03.096 ] 00:17:03.096 }' 00:17:03.096 04:15:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:03.096 04:15:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:03.665 04:15:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:03.665 04:15:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:03.665 04:15:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:03.665 04:15:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:03.665 04:15:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:17:03.665 04:15:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:03.665 04:15:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:03.665 04:15:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.665 04:15:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:03.665 04:15:03 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:03.665 [2024-11-21 04:15:03.467370] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:03.665 04:15:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.665 04:15:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:03.665 "name": "Existed_Raid", 00:17:03.665 "aliases": [ 00:17:03.665 "3baa77ed-c4d5-4611-89d3-7e694dcc1512" 00:17:03.665 ], 00:17:03.665 "product_name": "Raid Volume", 00:17:03.665 "block_size": 4128, 00:17:03.665 "num_blocks": 7936, 00:17:03.665 "uuid": "3baa77ed-c4d5-4611-89d3-7e694dcc1512", 00:17:03.665 "md_size": 32, 00:17:03.665 "md_interleave": true, 00:17:03.665 "dif_type": 0, 00:17:03.665 "assigned_rate_limits": { 00:17:03.665 "rw_ios_per_sec": 0, 00:17:03.665 "rw_mbytes_per_sec": 0, 00:17:03.665 "r_mbytes_per_sec": 0, 00:17:03.665 "w_mbytes_per_sec": 0 00:17:03.665 }, 00:17:03.665 "claimed": false, 00:17:03.665 "zoned": false, 00:17:03.665 "supported_io_types": { 00:17:03.665 "read": true, 00:17:03.665 "write": true, 00:17:03.665 "unmap": false, 00:17:03.665 "flush": false, 00:17:03.665 "reset": true, 00:17:03.665 "nvme_admin": false, 00:17:03.665 "nvme_io": false, 00:17:03.665 "nvme_io_md": false, 00:17:03.665 "write_zeroes": true, 00:17:03.665 "zcopy": false, 00:17:03.665 "get_zone_info": false, 00:17:03.665 "zone_management": false, 00:17:03.665 "zone_append": false, 00:17:03.665 "compare": false, 00:17:03.665 "compare_and_write": false, 00:17:03.666 "abort": false, 00:17:03.666 "seek_hole": false, 00:17:03.666 "seek_data": false, 00:17:03.666 "copy": false, 00:17:03.666 "nvme_iov_md": false 00:17:03.666 }, 00:17:03.666 "memory_domains": [ 00:17:03.666 { 00:17:03.666 "dma_device_id": "system", 00:17:03.666 "dma_device_type": 1 00:17:03.666 }, 00:17:03.666 { 00:17:03.666 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:17:03.666 "dma_device_type": 2 00:17:03.666 }, 00:17:03.666 { 00:17:03.666 "dma_device_id": "system", 00:17:03.666 "dma_device_type": 1 00:17:03.666 }, 00:17:03.666 { 00:17:03.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:03.666 "dma_device_type": 2 00:17:03.666 } 00:17:03.666 ], 00:17:03.666 "driver_specific": { 00:17:03.666 "raid": { 00:17:03.666 "uuid": "3baa77ed-c4d5-4611-89d3-7e694dcc1512", 00:17:03.666 "strip_size_kb": 0, 00:17:03.666 "state": "online", 00:17:03.666 "raid_level": "raid1", 00:17:03.666 "superblock": true, 00:17:03.666 "num_base_bdevs": 2, 00:17:03.666 "num_base_bdevs_discovered": 2, 00:17:03.666 "num_base_bdevs_operational": 2, 00:17:03.666 "base_bdevs_list": [ 00:17:03.666 { 00:17:03.666 "name": "BaseBdev1", 00:17:03.666 "uuid": "1fcfd8d4-6489-4dfd-a9c4-76fa4610d2ef", 00:17:03.666 "is_configured": true, 00:17:03.666 "data_offset": 256, 00:17:03.666 "data_size": 7936 00:17:03.666 }, 00:17:03.666 { 00:17:03.666 "name": "BaseBdev2", 00:17:03.666 "uuid": "293ae53b-606b-48a6-a0e7-43a00188e514", 00:17:03.666 "is_configured": true, 00:17:03.666 "data_offset": 256, 00:17:03.666 "data_size": 7936 00:17:03.666 } 00:17:03.666 ] 00:17:03.666 } 00:17:03.666 } 00:17:03.666 }' 00:17:03.666 04:15:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:03.666 04:15:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:03.666 BaseBdev2' 00:17:03.666 04:15:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:03.666 04:15:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:17:03.666 04:15:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:17:03.666 04:15:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:03.666 04:15:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.666 04:15:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:03.666 04:15:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:03.666 04:15:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.926 04:15:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:17:03.926 04:15:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:17:03.926 04:15:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:03.926 04:15:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:03.926 04:15:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.926 04:15:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:03.926 04:15:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:03.926 04:15:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.926 04:15:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:17:03.926 
04:15:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:17:03.926 04:15:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:03.926 04:15:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.926 04:15:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:03.926 [2024-11-21 04:15:03.710758] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:03.926 04:15:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.926 04:15:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:03.926 04:15:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:17:03.926 04:15:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:03.926 04:15:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:17:03.926 04:15:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:03.926 04:15:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:03.926 04:15:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:03.926 04:15:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:03.926 04:15:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:03.926 04:15:03 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:03.926 04:15:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:03.926 04:15:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:03.926 04:15:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:03.926 04:15:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:03.926 04:15:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:03.926 04:15:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:03.926 04:15:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.926 04:15:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.926 04:15:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:03.926 04:15:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.926 04:15:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:03.926 "name": "Existed_Raid", 00:17:03.926 "uuid": "3baa77ed-c4d5-4611-89d3-7e694dcc1512", 00:17:03.926 "strip_size_kb": 0, 00:17:03.926 "state": "online", 00:17:03.926 "raid_level": "raid1", 00:17:03.926 "superblock": true, 00:17:03.926 "num_base_bdevs": 2, 00:17:03.926 "num_base_bdevs_discovered": 1, 00:17:03.926 "num_base_bdevs_operational": 1, 00:17:03.926 "base_bdevs_list": [ 00:17:03.926 { 00:17:03.926 "name": null, 00:17:03.926 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:03.926 "is_configured": false, 00:17:03.926 "data_offset": 0, 00:17:03.926 "data_size": 7936 00:17:03.926 }, 00:17:03.926 { 00:17:03.926 "name": "BaseBdev2", 00:17:03.926 "uuid": "293ae53b-606b-48a6-a0e7-43a00188e514", 00:17:03.926 "is_configured": true, 00:17:03.926 "data_offset": 256, 00:17:03.926 "data_size": 7936 00:17:03.926 } 00:17:03.926 ] 00:17:03.926 }' 00:17:03.926 04:15:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:03.926 04:15:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:04.497 04:15:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:04.497 04:15:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:04.497 04:15:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.497 04:15:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.497 04:15:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:04.497 04:15:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:04.497 04:15:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.497 04:15:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:04.497 04:15:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:04.497 04:15:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:04.497 04:15:04 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.497 04:15:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:04.497 [2024-11-21 04:15:04.219637] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:04.497 [2024-11-21 04:15:04.219746] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:04.497 [2024-11-21 04:15:04.241757] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:04.497 [2024-11-21 04:15:04.241890] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:04.497 [2024-11-21 04:15:04.241909] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:17:04.497 04:15:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.497 04:15:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:04.497 04:15:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:04.497 04:15:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.497 04:15:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:04.497 04:15:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.497 04:15:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:04.497 04:15:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.497 04:15:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:04.497 04:15:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:04.497 04:15:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:17:04.497 04:15:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 98839 00:17:04.497 04:15:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 98839 ']' 00:17:04.497 04:15:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 98839 00:17:04.497 04:15:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:17:04.497 04:15:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:04.497 04:15:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 98839 00:17:04.497 04:15:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:04.497 killing process with pid 98839 00:17:04.497 04:15:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:04.497 04:15:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 98839' 00:17:04.497 04:15:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 98839 00:17:04.497 [2024-11-21 04:15:04.342075] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:04.497 04:15:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 98839 00:17:04.497 [2024-11-21 04:15:04.343690] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:04.757 
04:15:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:17:04.757 00:17:04.757 real 0m4.038s 00:17:04.757 user 0m6.205s 00:17:04.757 sys 0m0.885s 00:17:04.757 04:15:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:04.757 04:15:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:04.757 ************************************ 00:17:04.757 END TEST raid_state_function_test_sb_md_interleaved 00:17:04.757 ************************************ 00:17:05.018 04:15:04 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:17:05.018 04:15:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:05.018 04:15:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:05.018 04:15:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:05.018 ************************************ 00:17:05.018 START TEST raid_superblock_test_md_interleaved 00:17:05.018 ************************************ 00:17:05.018 04:15:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:17:05.018 04:15:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:17:05.018 04:15:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:17:05.018 04:15:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:05.018 04:15:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:05.018 04:15:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:05.018 04:15:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:17:05.018 04:15:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:05.018 04:15:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:05.018 04:15:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:05.018 04:15:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:05.018 04:15:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:05.018 04:15:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:05.018 04:15:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:05.018 04:15:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:17:05.018 04:15:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:17:05.019 04:15:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=99080 00:17:05.019 04:15:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:05.019 04:15:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 99080 00:17:05.019 04:15:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 99080 ']' 00:17:05.019 04:15:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:05.019 04:15:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:05.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:05.019 04:15:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:05.019 04:15:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:05.019 04:15:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:05.019 [2024-11-21 04:15:04.839701] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:17:05.019 [2024-11-21 04:15:04.839820] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99080 ] 00:17:05.019 [2024-11-21 04:15:04.971566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:05.279 [2024-11-21 04:15:05.010489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:05.279 [2024-11-21 04:15:05.087457] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:05.279 [2024-11-21 04:15:05.087505] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:05.850 04:15:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:05.850 04:15:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:17:05.850 04:15:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:05.850 04:15:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:05.850 04:15:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:05.850 04:15:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:17:05.850 04:15:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:05.850 04:15:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:05.850 04:15:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:05.850 04:15:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:05.850 04:15:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:17:05.850 04:15:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.850 04:15:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:05.850 malloc1 00:17:05.850 04:15:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.850 04:15:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:05.850 04:15:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.850 04:15:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:05.850 [2024-11-21 04:15:05.691525] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:05.850 [2024-11-21 04:15:05.691657] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:05.850 [2024-11-21 04:15:05.691705] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:17:05.850 [2024-11-21 04:15:05.691780] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:05.850 
[2024-11-21 04:15:05.694020] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:05.850 [2024-11-21 04:15:05.694097] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:05.850 pt1 00:17:05.850 04:15:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.850 04:15:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:05.850 04:15:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:05.850 04:15:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:05.850 04:15:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:17:05.850 04:15:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:05.850 04:15:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:05.850 04:15:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:05.850 04:15:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:05.850 04:15:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:17:05.850 04:15:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.850 04:15:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:05.850 malloc2 00:17:05.850 04:15:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.850 04:15:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:05.850 04:15:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.850 04:15:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:05.850 [2024-11-21 04:15:05.730800] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:05.850 [2024-11-21 04:15:05.730856] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:05.850 [2024-11-21 04:15:05.730874] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:05.850 [2024-11-21 04:15:05.730885] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:05.850 [2024-11-21 04:15:05.733064] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:05.850 [2024-11-21 04:15:05.733100] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:05.850 pt2 00:17:05.851 04:15:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.851 04:15:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:05.851 04:15:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:05.851 04:15:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:17:05.851 04:15:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.851 04:15:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:05.851 [2024-11-21 04:15:05.742817] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:05.851 [2024-11-21 04:15:05.744976] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:05.851 [2024-11-21 04:15:05.745204] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:17:05.851 [2024-11-21 04:15:05.745226] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:05.851 [2024-11-21 04:15:05.745340] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:17:05.851 [2024-11-21 04:15:05.745407] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:17:05.851 [2024-11-21 04:15:05.745422] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:17:05.851 [2024-11-21 04:15:05.745490] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:05.851 04:15:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.851 04:15:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:05.851 04:15:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:05.851 04:15:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:05.851 04:15:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:05.851 04:15:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:05.851 04:15:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:05.851 04:15:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:05.851 04:15:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:05.851 
04:15:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:05.851 04:15:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:05.851 04:15:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.851 04:15:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.851 04:15:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.851 04:15:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:05.851 04:15:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.851 04:15:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:05.851 "name": "raid_bdev1", 00:17:05.851 "uuid": "40dc8f55-9b7e-4b21-a4a0-e4368ba6e242", 00:17:05.851 "strip_size_kb": 0, 00:17:05.851 "state": "online", 00:17:05.851 "raid_level": "raid1", 00:17:05.851 "superblock": true, 00:17:05.851 "num_base_bdevs": 2, 00:17:05.851 "num_base_bdevs_discovered": 2, 00:17:05.851 "num_base_bdevs_operational": 2, 00:17:05.851 "base_bdevs_list": [ 00:17:05.851 { 00:17:05.851 "name": "pt1", 00:17:05.851 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:05.851 "is_configured": true, 00:17:05.851 "data_offset": 256, 00:17:05.851 "data_size": 7936 00:17:05.851 }, 00:17:05.851 { 00:17:05.851 "name": "pt2", 00:17:05.851 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:05.851 "is_configured": true, 00:17:05.851 "data_offset": 256, 00:17:05.851 "data_size": 7936 00:17:05.851 } 00:17:05.851 ] 00:17:05.851 }' 00:17:05.851 04:15:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:05.851 04:15:05 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:06.420 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:06.420 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:06.420 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:06.420 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:06.420 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:17:06.420 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:06.420 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:06.420 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:06.420 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.420 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:06.420 [2024-11-21 04:15:06.210300] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:06.420 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.420 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:06.420 "name": "raid_bdev1", 00:17:06.420 "aliases": [ 00:17:06.420 "40dc8f55-9b7e-4b21-a4a0-e4368ba6e242" 00:17:06.420 ], 00:17:06.420 "product_name": "Raid Volume", 00:17:06.420 "block_size": 4128, 00:17:06.420 "num_blocks": 7936, 00:17:06.420 "uuid": "40dc8f55-9b7e-4b21-a4a0-e4368ba6e242", 00:17:06.420 "md_size": 32, 
00:17:06.421 "md_interleave": true, 00:17:06.421 "dif_type": 0, 00:17:06.421 "assigned_rate_limits": { 00:17:06.421 "rw_ios_per_sec": 0, 00:17:06.421 "rw_mbytes_per_sec": 0, 00:17:06.421 "r_mbytes_per_sec": 0, 00:17:06.421 "w_mbytes_per_sec": 0 00:17:06.421 }, 00:17:06.421 "claimed": false, 00:17:06.421 "zoned": false, 00:17:06.421 "supported_io_types": { 00:17:06.421 "read": true, 00:17:06.421 "write": true, 00:17:06.421 "unmap": false, 00:17:06.421 "flush": false, 00:17:06.421 "reset": true, 00:17:06.421 "nvme_admin": false, 00:17:06.421 "nvme_io": false, 00:17:06.421 "nvme_io_md": false, 00:17:06.421 "write_zeroes": true, 00:17:06.421 "zcopy": false, 00:17:06.421 "get_zone_info": false, 00:17:06.421 "zone_management": false, 00:17:06.421 "zone_append": false, 00:17:06.421 "compare": false, 00:17:06.421 "compare_and_write": false, 00:17:06.421 "abort": false, 00:17:06.421 "seek_hole": false, 00:17:06.421 "seek_data": false, 00:17:06.421 "copy": false, 00:17:06.421 "nvme_iov_md": false 00:17:06.421 }, 00:17:06.421 "memory_domains": [ 00:17:06.421 { 00:17:06.421 "dma_device_id": "system", 00:17:06.421 "dma_device_type": 1 00:17:06.421 }, 00:17:06.421 { 00:17:06.421 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:06.421 "dma_device_type": 2 00:17:06.421 }, 00:17:06.421 { 00:17:06.421 "dma_device_id": "system", 00:17:06.421 "dma_device_type": 1 00:17:06.421 }, 00:17:06.421 { 00:17:06.421 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:06.421 "dma_device_type": 2 00:17:06.421 } 00:17:06.421 ], 00:17:06.421 "driver_specific": { 00:17:06.421 "raid": { 00:17:06.421 "uuid": "40dc8f55-9b7e-4b21-a4a0-e4368ba6e242", 00:17:06.421 "strip_size_kb": 0, 00:17:06.421 "state": "online", 00:17:06.421 "raid_level": "raid1", 00:17:06.421 "superblock": true, 00:17:06.421 "num_base_bdevs": 2, 00:17:06.421 "num_base_bdevs_discovered": 2, 00:17:06.421 "num_base_bdevs_operational": 2, 00:17:06.421 "base_bdevs_list": [ 00:17:06.421 { 00:17:06.421 "name": "pt1", 00:17:06.421 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:17:06.421 "is_configured": true, 00:17:06.421 "data_offset": 256, 00:17:06.421 "data_size": 7936 00:17:06.421 }, 00:17:06.421 { 00:17:06.421 "name": "pt2", 00:17:06.421 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:06.421 "is_configured": true, 00:17:06.421 "data_offset": 256, 00:17:06.421 "data_size": 7936 00:17:06.421 } 00:17:06.421 ] 00:17:06.421 } 00:17:06.421 } 00:17:06.421 }' 00:17:06.421 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:06.421 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:06.421 pt2' 00:17:06.421 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:06.421 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:17:06.421 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:06.421 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:06.421 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:06.421 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.421 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:06.421 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.421 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:17:06.421 04:15:06 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:17:06.421 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:06.421 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:06.421 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.421 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:06.421 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:06.421 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.682 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:17:06.682 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:17:06.682 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:06.682 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:06.682 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.682 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:06.682 [2024-11-21 04:15:06.433797] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:06.682 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.682 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=40dc8f55-9b7e-4b21-a4a0-e4368ba6e242 00:17:06.682 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 40dc8f55-9b7e-4b21-a4a0-e4368ba6e242 ']' 00:17:06.682 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:06.682 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.682 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:06.682 [2024-11-21 04:15:06.477507] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:06.682 [2024-11-21 04:15:06.477531] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:06.682 [2024-11-21 04:15:06.477605] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:06.682 [2024-11-21 04:15:06.477673] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:06.682 [2024-11-21 04:15:06.477683] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:17:06.682 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.682 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:06.682 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.682 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.682 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:06.682 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.682 04:15:06 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:06.682 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:06.682 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:06.682 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:06.682 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.682 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:06.682 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.682 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:06.682 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:06.682 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.682 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:06.682 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.682 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:06.682 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.682 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:06.682 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:06.682 04:15:06 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.682 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:06.682 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:06.682 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:17:06.682 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:06.682 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:06.682 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:06.682 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:06.682 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:06.682 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:06.682 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.682 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:06.682 [2024-11-21 04:15:06.617285] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:06.682 [2024-11-21 04:15:06.619351] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:06.682 [2024-11-21 04:15:06.619407] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:17:06.682 [2024-11-21 04:15:06.619456] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:06.682 [2024-11-21 04:15:06.619473] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:06.682 [2024-11-21 04:15:06.619481] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:17:06.682 request: 00:17:06.682 { 00:17:06.682 "name": "raid_bdev1", 00:17:06.682 "raid_level": "raid1", 00:17:06.682 "base_bdevs": [ 00:17:06.682 "malloc1", 00:17:06.682 "malloc2" 00:17:06.682 ], 00:17:06.682 "superblock": false, 00:17:06.682 "method": "bdev_raid_create", 00:17:06.682 "req_id": 1 00:17:06.682 } 00:17:06.682 Got JSON-RPC error response 00:17:06.682 response: 00:17:06.682 { 00:17:06.682 "code": -17, 00:17:06.682 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:06.682 } 00:17:06.682 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:06.682 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:17:06.682 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:06.682 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:06.682 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:06.682 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.682 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:06.682 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.682 04:15:06 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:06.682 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.942 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:06.942 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:06.942 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:06.942 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.942 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:06.942 [2024-11-21 04:15:06.685126] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:06.942 [2024-11-21 04:15:06.685230] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:06.942 [2024-11-21 04:15:06.685275] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:06.942 [2024-11-21 04:15:06.685304] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:06.942 [2024-11-21 04:15:06.687484] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:06.942 [2024-11-21 04:15:06.687546] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:06.942 [2024-11-21 04:15:06.687624] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:06.942 [2024-11-21 04:15:06.687691] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:06.942 pt1 00:17:06.942 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.942 04:15:06 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:06.942 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:06.942 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:06.942 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:06.942 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:06.942 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:06.942 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:06.942 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:06.942 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:06.942 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:06.942 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.942 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.942 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.942 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:06.942 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.942 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:06.942 
"name": "raid_bdev1", 00:17:06.942 "uuid": "40dc8f55-9b7e-4b21-a4a0-e4368ba6e242", 00:17:06.942 "strip_size_kb": 0, 00:17:06.942 "state": "configuring", 00:17:06.942 "raid_level": "raid1", 00:17:06.942 "superblock": true, 00:17:06.942 "num_base_bdevs": 2, 00:17:06.942 "num_base_bdevs_discovered": 1, 00:17:06.942 "num_base_bdevs_operational": 2, 00:17:06.942 "base_bdevs_list": [ 00:17:06.942 { 00:17:06.942 "name": "pt1", 00:17:06.942 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:06.942 "is_configured": true, 00:17:06.942 "data_offset": 256, 00:17:06.942 "data_size": 7936 00:17:06.942 }, 00:17:06.942 { 00:17:06.942 "name": null, 00:17:06.942 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:06.942 "is_configured": false, 00:17:06.942 "data_offset": 256, 00:17:06.942 "data_size": 7936 00:17:06.942 } 00:17:06.942 ] 00:17:06.942 }' 00:17:06.942 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:06.942 04:15:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:07.511 04:15:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:17:07.511 04:15:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:07.511 04:15:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:07.511 04:15:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:07.512 04:15:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.512 04:15:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:07.512 [2024-11-21 04:15:07.184397] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:07.512 [2024-11-21 04:15:07.184493] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:07.512 [2024-11-21 04:15:07.184533] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:17:07.512 [2024-11-21 04:15:07.184561] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:07.512 [2024-11-21 04:15:07.184790] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:07.512 [2024-11-21 04:15:07.184838] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:07.512 [2024-11-21 04:15:07.184923] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:07.512 [2024-11-21 04:15:07.184979] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:07.512 [2024-11-21 04:15:07.185113] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:17:07.512 [2024-11-21 04:15:07.185149] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:07.512 [2024-11-21 04:15:07.185283] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:17:07.512 [2024-11-21 04:15:07.185381] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:17:07.512 [2024-11-21 04:15:07.185425] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:17:07.512 [2024-11-21 04:15:07.185541] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:07.512 pt2 00:17:07.512 04:15:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.512 04:15:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:07.512 04:15:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:07.512 04:15:07 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:07.512 04:15:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:07.512 04:15:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:07.512 04:15:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:07.512 04:15:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:07.512 04:15:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:07.512 04:15:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:07.512 04:15:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:07.512 04:15:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:07.512 04:15:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:07.512 04:15:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.512 04:15:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.512 04:15:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:07.512 04:15:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:07.512 04:15:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.512 04:15:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:07.512 "name": 
"raid_bdev1", 00:17:07.512 "uuid": "40dc8f55-9b7e-4b21-a4a0-e4368ba6e242", 00:17:07.512 "strip_size_kb": 0, 00:17:07.512 "state": "online", 00:17:07.512 "raid_level": "raid1", 00:17:07.512 "superblock": true, 00:17:07.512 "num_base_bdevs": 2, 00:17:07.512 "num_base_bdevs_discovered": 2, 00:17:07.512 "num_base_bdevs_operational": 2, 00:17:07.512 "base_bdevs_list": [ 00:17:07.512 { 00:17:07.512 "name": "pt1", 00:17:07.512 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:07.512 "is_configured": true, 00:17:07.512 "data_offset": 256, 00:17:07.512 "data_size": 7936 00:17:07.512 }, 00:17:07.512 { 00:17:07.512 "name": "pt2", 00:17:07.512 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:07.512 "is_configured": true, 00:17:07.512 "data_offset": 256, 00:17:07.512 "data_size": 7936 00:17:07.512 } 00:17:07.512 ] 00:17:07.512 }' 00:17:07.512 04:15:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:07.512 04:15:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:07.773 04:15:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:07.773 04:15:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:07.773 04:15:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:07.773 04:15:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:07.773 04:15:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:17:07.773 04:15:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:07.773 04:15:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:07.773 04:15:07 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.773 04:15:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:07.773 04:15:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:07.773 [2024-11-21 04:15:07.655864] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:07.773 04:15:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.773 04:15:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:07.773 "name": "raid_bdev1", 00:17:07.773 "aliases": [ 00:17:07.773 "40dc8f55-9b7e-4b21-a4a0-e4368ba6e242" 00:17:07.773 ], 00:17:07.773 "product_name": "Raid Volume", 00:17:07.773 "block_size": 4128, 00:17:07.773 "num_blocks": 7936, 00:17:07.773 "uuid": "40dc8f55-9b7e-4b21-a4a0-e4368ba6e242", 00:17:07.773 "md_size": 32, 00:17:07.773 "md_interleave": true, 00:17:07.773 "dif_type": 0, 00:17:07.773 "assigned_rate_limits": { 00:17:07.773 "rw_ios_per_sec": 0, 00:17:07.773 "rw_mbytes_per_sec": 0, 00:17:07.773 "r_mbytes_per_sec": 0, 00:17:07.773 "w_mbytes_per_sec": 0 00:17:07.773 }, 00:17:07.773 "claimed": false, 00:17:07.773 "zoned": false, 00:17:07.773 "supported_io_types": { 00:17:07.773 "read": true, 00:17:07.773 "write": true, 00:17:07.773 "unmap": false, 00:17:07.773 "flush": false, 00:17:07.773 "reset": true, 00:17:07.773 "nvme_admin": false, 00:17:07.773 "nvme_io": false, 00:17:07.773 "nvme_io_md": false, 00:17:07.773 "write_zeroes": true, 00:17:07.773 "zcopy": false, 00:17:07.773 "get_zone_info": false, 00:17:07.773 "zone_management": false, 00:17:07.773 "zone_append": false, 00:17:07.773 "compare": false, 00:17:07.773 "compare_and_write": false, 00:17:07.773 "abort": false, 00:17:07.773 "seek_hole": false, 00:17:07.773 "seek_data": false, 00:17:07.773 "copy": false, 00:17:07.773 "nvme_iov_md": 
false 00:17:07.773 }, 00:17:07.773 "memory_domains": [ 00:17:07.773 { 00:17:07.773 "dma_device_id": "system", 00:17:07.773 "dma_device_type": 1 00:17:07.773 }, 00:17:07.773 { 00:17:07.773 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:07.773 "dma_device_type": 2 00:17:07.773 }, 00:17:07.773 { 00:17:07.773 "dma_device_id": "system", 00:17:07.773 "dma_device_type": 1 00:17:07.773 }, 00:17:07.773 { 00:17:07.773 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:07.773 "dma_device_type": 2 00:17:07.773 } 00:17:07.773 ], 00:17:07.773 "driver_specific": { 00:17:07.773 "raid": { 00:17:07.773 "uuid": "40dc8f55-9b7e-4b21-a4a0-e4368ba6e242", 00:17:07.773 "strip_size_kb": 0, 00:17:07.773 "state": "online", 00:17:07.773 "raid_level": "raid1", 00:17:07.773 "superblock": true, 00:17:07.773 "num_base_bdevs": 2, 00:17:07.773 "num_base_bdevs_discovered": 2, 00:17:07.773 "num_base_bdevs_operational": 2, 00:17:07.773 "base_bdevs_list": [ 00:17:07.773 { 00:17:07.773 "name": "pt1", 00:17:07.773 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:07.773 "is_configured": true, 00:17:07.773 "data_offset": 256, 00:17:07.773 "data_size": 7936 00:17:07.773 }, 00:17:07.773 { 00:17:07.773 "name": "pt2", 00:17:07.773 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:07.773 "is_configured": true, 00:17:07.773 "data_offset": 256, 00:17:07.773 "data_size": 7936 00:17:07.773 } 00:17:07.773 ] 00:17:07.773 } 00:17:07.773 } 00:17:07.773 }' 00:17:07.773 04:15:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:07.773 04:15:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:07.773 pt2' 00:17:07.773 04:15:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:08.034 04:15:07 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:17:08.034 04:15:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:08.034 04:15:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:08.034 04:15:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.034 04:15:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:08.034 04:15:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:08.034 04:15:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.034 04:15:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:17:08.034 04:15:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:17:08.034 04:15:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:08.034 04:15:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:08.034 04:15:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:08.034 04:15:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.034 04:15:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:08.034 04:15:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.034 04:15:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='4128 32 true 0' 00:17:08.034 04:15:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:17:08.034 04:15:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:08.034 04:15:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:08.034 04:15:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.034 04:15:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:08.034 [2024-11-21 04:15:07.899451] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:08.034 04:15:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.034 04:15:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 40dc8f55-9b7e-4b21-a4a0-e4368ba6e242 '!=' 40dc8f55-9b7e-4b21-a4a0-e4368ba6e242 ']' 00:17:08.034 04:15:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:17:08.034 04:15:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:08.034 04:15:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:17:08.034 04:15:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:08.034 04:15:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.034 04:15:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:08.034 [2024-11-21 04:15:07.943157] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:08.034 04:15:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:17:08.034 04:15:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:08.034 04:15:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:08.034 04:15:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:08.034 04:15:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:08.034 04:15:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:08.034 04:15:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:08.034 04:15:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:08.034 04:15:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:08.034 04:15:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:08.034 04:15:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:08.034 04:15:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.034 04:15:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.034 04:15:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.034 04:15:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:08.034 04:15:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.034 04:15:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:17:08.034 "name": "raid_bdev1", 00:17:08.034 "uuid": "40dc8f55-9b7e-4b21-a4a0-e4368ba6e242", 00:17:08.034 "strip_size_kb": 0, 00:17:08.034 "state": "online", 00:17:08.034 "raid_level": "raid1", 00:17:08.034 "superblock": true, 00:17:08.034 "num_base_bdevs": 2, 00:17:08.034 "num_base_bdevs_discovered": 1, 00:17:08.034 "num_base_bdevs_operational": 1, 00:17:08.034 "base_bdevs_list": [ 00:17:08.034 { 00:17:08.034 "name": null, 00:17:08.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.034 "is_configured": false, 00:17:08.034 "data_offset": 0, 00:17:08.034 "data_size": 7936 00:17:08.034 }, 00:17:08.034 { 00:17:08.034 "name": "pt2", 00:17:08.034 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:08.034 "is_configured": true, 00:17:08.034 "data_offset": 256, 00:17:08.034 "data_size": 7936 00:17:08.034 } 00:17:08.034 ] 00:17:08.034 }' 00:17:08.034 04:15:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:08.034 04:15:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:08.604 04:15:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:08.604 04:15:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.604 04:15:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:08.604 [2024-11-21 04:15:08.362393] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:08.604 [2024-11-21 04:15:08.362460] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:08.604 [2024-11-21 04:15:08.362562] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:08.605 [2024-11-21 04:15:08.362670] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:17:08.605 [2024-11-21 04:15:08.362719] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:17:08.605 04:15:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.605 04:15:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.605 04:15:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.605 04:15:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:08.605 04:15:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:08.605 04:15:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.605 04:15:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:08.605 04:15:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:08.605 04:15:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:08.605 04:15:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:08.605 04:15:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:08.605 04:15:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.605 04:15:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:08.605 04:15:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.605 04:15:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:08.605 04:15:08 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:08.605 04:15:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:08.605 04:15:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:08.605 04:15:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:17:08.605 04:15:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:08.605 04:15:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.605 04:15:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:08.605 [2024-11-21 04:15:08.438260] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:08.605 [2024-11-21 04:15:08.438362] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:08.605 [2024-11-21 04:15:08.438401] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:17:08.605 [2024-11-21 04:15:08.438439] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:08.605 [2024-11-21 04:15:08.440744] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:08.605 [2024-11-21 04:15:08.440815] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:08.605 [2024-11-21 04:15:08.440905] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:08.605 [2024-11-21 04:15:08.440962] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:08.605 [2024-11-21 04:15:08.441073] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:17:08.605 [2024-11-21 04:15:08.441108] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 7936, blocklen 4128 00:17:08.605 [2024-11-21 04:15:08.441253] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:17:08.605 [2024-11-21 04:15:08.441358] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:17:08.605 [2024-11-21 04:15:08.441395] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:17:08.605 [2024-11-21 04:15:08.441531] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:08.605 pt2 00:17:08.605 04:15:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.605 04:15:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:08.605 04:15:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:08.605 04:15:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:08.605 04:15:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:08.605 04:15:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:08.605 04:15:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:08.605 04:15:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:08.605 04:15:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:08.605 04:15:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:08.605 04:15:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:08.605 04:15:08 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.605 04:15:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.605 04:15:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.605 04:15:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:08.605 04:15:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.605 04:15:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:08.605 "name": "raid_bdev1", 00:17:08.605 "uuid": "40dc8f55-9b7e-4b21-a4a0-e4368ba6e242", 00:17:08.605 "strip_size_kb": 0, 00:17:08.605 "state": "online", 00:17:08.605 "raid_level": "raid1", 00:17:08.605 "superblock": true, 00:17:08.605 "num_base_bdevs": 2, 00:17:08.605 "num_base_bdevs_discovered": 1, 00:17:08.605 "num_base_bdevs_operational": 1, 00:17:08.605 "base_bdevs_list": [ 00:17:08.605 { 00:17:08.605 "name": null, 00:17:08.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.605 "is_configured": false, 00:17:08.605 "data_offset": 256, 00:17:08.605 "data_size": 7936 00:17:08.605 }, 00:17:08.605 { 00:17:08.605 "name": "pt2", 00:17:08.605 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:08.605 "is_configured": true, 00:17:08.605 "data_offset": 256, 00:17:08.605 "data_size": 7936 00:17:08.605 } 00:17:08.605 ] 00:17:08.605 }' 00:17:08.605 04:15:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:08.605 04:15:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:09.175 04:15:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:09.175 04:15:08 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.175 04:15:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:09.175 [2024-11-21 04:15:08.877467] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:09.175 [2024-11-21 04:15:08.877489] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:09.175 [2024-11-21 04:15:08.877548] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:09.175 [2024-11-21 04:15:08.877586] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:09.175 [2024-11-21 04:15:08.877600] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:17:09.175 04:15:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.175 04:15:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.175 04:15:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:09.175 04:15:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.175 04:15:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:09.175 04:15:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.175 04:15:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:09.175 04:15:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:17:09.175 04:15:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:17:09.175 04:15:08 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:09.175 04:15:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.175 04:15:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:09.175 [2024-11-21 04:15:08.937386] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:09.175 [2024-11-21 04:15:08.937443] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:09.175 [2024-11-21 04:15:08.937461] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:17:09.175 [2024-11-21 04:15:08.937476] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:09.175 [2024-11-21 04:15:08.939673] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:09.175 [2024-11-21 04:15:08.939708] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:09.175 [2024-11-21 04:15:08.939750] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:09.175 [2024-11-21 04:15:08.939781] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:09.175 [2024-11-21 04:15:08.939866] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:09.175 [2024-11-21 04:15:08.939880] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:09.175 [2024-11-21 04:15:08.939903] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state configuring 00:17:09.175 [2024-11-21 04:15:08.939936] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:09.175 [2024-11-21 04:15:08.939996] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000002380 00:17:09.175 [2024-11-21 04:15:08.940008] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:09.175 [2024-11-21 04:15:08.940082] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:17:09.176 [2024-11-21 04:15:08.940132] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:17:09.176 [2024-11-21 04:15:08.940139] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:17:09.176 [2024-11-21 04:15:08.940198] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:09.176 pt1 00:17:09.176 04:15:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.176 04:15:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:17:09.176 04:15:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:09.176 04:15:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:09.176 04:15:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:09.176 04:15:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:09.176 04:15:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:09.176 04:15:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:09.176 04:15:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:09.176 04:15:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:09.176 04:15:08 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:09.176 04:15:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:09.176 04:15:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.176 04:15:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.176 04:15:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.176 04:15:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:09.176 04:15:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.176 04:15:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:09.176 "name": "raid_bdev1", 00:17:09.176 "uuid": "40dc8f55-9b7e-4b21-a4a0-e4368ba6e242", 00:17:09.176 "strip_size_kb": 0, 00:17:09.176 "state": "online", 00:17:09.176 "raid_level": "raid1", 00:17:09.176 "superblock": true, 00:17:09.176 "num_base_bdevs": 2, 00:17:09.176 "num_base_bdevs_discovered": 1, 00:17:09.176 "num_base_bdevs_operational": 1, 00:17:09.176 "base_bdevs_list": [ 00:17:09.176 { 00:17:09.176 "name": null, 00:17:09.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.176 "is_configured": false, 00:17:09.176 "data_offset": 256, 00:17:09.176 "data_size": 7936 00:17:09.176 }, 00:17:09.176 { 00:17:09.176 "name": "pt2", 00:17:09.176 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:09.176 "is_configured": true, 00:17:09.176 "data_offset": 256, 00:17:09.176 "data_size": 7936 00:17:09.176 } 00:17:09.176 ] 00:17:09.176 }' 00:17:09.176 04:15:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:09.176 04:15:08 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:17:09.756 04:15:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:09.756 04:15:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:09.757 04:15:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.757 04:15:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:09.757 04:15:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.757 04:15:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:09.757 04:15:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:09.757 04:15:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.757 04:15:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:09.757 04:15:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:09.757 [2024-11-21 04:15:09.484711] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:09.757 04:15:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.757 04:15:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 40dc8f55-9b7e-4b21-a4a0-e4368ba6e242 '!=' 40dc8f55-9b7e-4b21-a4a0-e4368ba6e242 ']' 00:17:09.757 04:15:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 99080 00:17:09.757 04:15:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 99080 ']' 00:17:09.757 04:15:09 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 99080 00:17:09.757 04:15:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:17:09.757 04:15:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:09.757 04:15:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99080 00:17:09.757 04:15:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:09.757 04:15:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:09.757 04:15:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99080' 00:17:09.757 killing process with pid 99080 00:17:09.757 04:15:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 99080 00:17:09.757 [2024-11-21 04:15:09.569891] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:09.757 [2024-11-21 04:15:09.570010] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:09.757 04:15:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 99080 00:17:09.757 [2024-11-21 04:15:09.570090] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:09.757 [2024-11-21 04:15:09.570107] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:17:09.757 [2024-11-21 04:15:09.614292] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:10.016 04:15:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:17:10.016 00:17:10.016 real 0m5.186s 00:17:10.016 user 0m8.335s 00:17:10.016 sys 0m1.171s 00:17:10.016 
04:15:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:10.016 ************************************ 00:17:10.016 END TEST raid_superblock_test_md_interleaved 00:17:10.016 ************************************ 00:17:10.016 04:15:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:10.279 04:15:10 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:17:10.279 04:15:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:10.279 04:15:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:10.279 04:15:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:10.279 ************************************ 00:17:10.279 START TEST raid_rebuild_test_sb_md_interleaved 00:17:10.279 ************************************ 00:17:10.279 04:15:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false 00:17:10.279 04:15:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:10.279 04:15:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:17:10.279 04:15:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:10.279 04:15:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:10.279 04:15:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:17:10.279 04:15:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:10.279 04:15:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:10.279 04:15:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:10.279 04:15:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:10.279 04:15:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:10.279 04:15:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:10.279 04:15:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:10.279 04:15:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:10.279 04:15:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:10.279 04:15:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:10.279 04:15:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:10.279 04:15:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:10.279 04:15:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:10.279 04:15:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:10.279 04:15:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:10.279 04:15:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:10.279 04:15:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:10.279 04:15:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:10.279 04:15:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:10.279 04:15:10 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@597 -- # raid_pid=99392 00:17:10.279 04:15:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:10.279 04:15:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 99392 00:17:10.279 04:15:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 99392 ']' 00:17:10.279 04:15:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:10.279 04:15:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:10.279 04:15:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:10.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:10.279 04:15:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:10.279 04:15:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:10.279 [2024-11-21 04:15:10.118561] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:17:10.279 [2024-11-21 04:15:10.118806] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:17:10.279 Zero copy mechanism will not be used. 
00:17:10.279 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99392 ] 00:17:10.539 [2024-11-21 04:15:10.270046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:10.540 [2024-11-21 04:15:10.311811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:10.540 [2024-11-21 04:15:10.388781] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:10.540 [2024-11-21 04:15:10.388910] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:11.110 04:15:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:11.110 04:15:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:17:11.110 04:15:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:11.110 04:15:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:17:11.110 04:15:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.110 04:15:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:11.110 BaseBdev1_malloc 00:17:11.110 04:15:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.110 04:15:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:11.110 04:15:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.110 04:15:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:11.110 [2024-11-21 04:15:10.975739] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:11.110 [2024-11-21 04:15:10.975809] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:11.110 [2024-11-21 04:15:10.975840] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:17:11.110 [2024-11-21 04:15:10.975856] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:11.110 [2024-11-21 04:15:10.978139] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:11.110 [2024-11-21 04:15:10.978179] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:11.110 BaseBdev1 00:17:11.110 04:15:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.110 04:15:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:11.110 04:15:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:17:11.110 04:15:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.110 04:15:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:11.110 BaseBdev2_malloc 00:17:11.110 04:15:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.110 04:15:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:11.110 04:15:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.110 04:15:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:11.110 [2024-11-21 04:15:11.010736] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev2_malloc 00:17:11.110 [2024-11-21 04:15:11.010783] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:11.110 [2024-11-21 04:15:11.010806] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:11.110 [2024-11-21 04:15:11.010817] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:11.110 [2024-11-21 04:15:11.012985] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:11.110 [2024-11-21 04:15:11.013024] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:11.110 BaseBdev2 00:17:11.110 04:15:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.110 04:15:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:17:11.110 04:15:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.110 04:15:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:11.110 spare_malloc 00:17:11.110 04:15:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.110 04:15:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:11.110 04:15:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.110 04:15:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:11.110 spare_delay 00:17:11.110 04:15:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.110 04:15:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create 
-b spare_delay -p spare 00:17:11.110 04:15:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.110 04:15:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:11.110 [2024-11-21 04:15:11.057705] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:11.110 [2024-11-21 04:15:11.057754] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:11.110 [2024-11-21 04:15:11.057776] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:17:11.110 [2024-11-21 04:15:11.057784] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:11.110 [2024-11-21 04:15:11.059928] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:11.110 [2024-11-21 04:15:11.059959] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:11.110 spare 00:17:11.110 04:15:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.110 04:15:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:17:11.110 04:15:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.110 04:15:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:11.110 [2024-11-21 04:15:11.069739] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:11.110 [2024-11-21 04:15:11.071805] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:11.110 [2024-11-21 04:15:11.071972] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:17:11.110 [2024-11-21 04:15:11.071995] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:11.110 [2024-11-21 04:15:11.072076] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:17:11.110 [2024-11-21 04:15:11.072148] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:17:11.110 [2024-11-21 04:15:11.072161] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:17:11.110 [2024-11-21 04:15:11.072217] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:11.110 04:15:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.110 04:15:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:11.110 04:15:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:11.110 04:15:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:11.110 04:15:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:11.110 04:15:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:11.111 04:15:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:11.111 04:15:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:11.111 04:15:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:11.111 04:15:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:11.111 04:15:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:11.370 04:15:11 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.370 04:15:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.370 04:15:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:11.370 04:15:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.370 04:15:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.370 04:15:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:11.370 "name": "raid_bdev1", 00:17:11.370 "uuid": "c0e8d541-12ea-4722-8987-b0bba10cc952", 00:17:11.370 "strip_size_kb": 0, 00:17:11.370 "state": "online", 00:17:11.370 "raid_level": "raid1", 00:17:11.370 "superblock": true, 00:17:11.370 "num_base_bdevs": 2, 00:17:11.370 "num_base_bdevs_discovered": 2, 00:17:11.370 "num_base_bdevs_operational": 2, 00:17:11.370 "base_bdevs_list": [ 00:17:11.370 { 00:17:11.370 "name": "BaseBdev1", 00:17:11.370 "uuid": "832ef589-30fb-5ff8-98be-e2e7ea08b1c5", 00:17:11.370 "is_configured": true, 00:17:11.370 "data_offset": 256, 00:17:11.370 "data_size": 7936 00:17:11.370 }, 00:17:11.370 { 00:17:11.370 "name": "BaseBdev2", 00:17:11.370 "uuid": "7f59610c-e66a-557d-bfcd-6575faea8b08", 00:17:11.370 "is_configured": true, 00:17:11.370 "data_offset": 256, 00:17:11.370 "data_size": 7936 00:17:11.370 } 00:17:11.370 ] 00:17:11.370 }' 00:17:11.370 04:15:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:11.370 04:15:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:11.629 04:15:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:11.629 04:15:11 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:11.629 04:15:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.629 04:15:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:11.629 [2024-11-21 04:15:11.545163] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:11.629 04:15:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.629 04:15:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:17:11.629 04:15:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:11.629 04:15:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.629 04:15:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.629 04:15:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:11.889 04:15:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.889 04:15:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:17:11.889 04:15:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:11.889 04:15:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:17:11.889 04:15:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:11.889 04:15:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.889 04:15:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:17:11.889 [2024-11-21 04:15:11.636699] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:11.889 04:15:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.889 04:15:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:11.889 04:15:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:11.889 04:15:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:11.889 04:15:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:11.889 04:15:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:11.889 04:15:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:11.889 04:15:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:11.889 04:15:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:11.889 04:15:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:11.889 04:15:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:11.889 04:15:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.889 04:15:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.889 04:15:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.889 04:15:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:17:11.889 04:15:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.889 04:15:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:11.889 "name": "raid_bdev1", 00:17:11.889 "uuid": "c0e8d541-12ea-4722-8987-b0bba10cc952", 00:17:11.889 "strip_size_kb": 0, 00:17:11.889 "state": "online", 00:17:11.889 "raid_level": "raid1", 00:17:11.889 "superblock": true, 00:17:11.889 "num_base_bdevs": 2, 00:17:11.889 "num_base_bdevs_discovered": 1, 00:17:11.889 "num_base_bdevs_operational": 1, 00:17:11.889 "base_bdevs_list": [ 00:17:11.889 { 00:17:11.889 "name": null, 00:17:11.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:11.889 "is_configured": false, 00:17:11.889 "data_offset": 0, 00:17:11.889 "data_size": 7936 00:17:11.889 }, 00:17:11.889 { 00:17:11.889 "name": "BaseBdev2", 00:17:11.889 "uuid": "7f59610c-e66a-557d-bfcd-6575faea8b08", 00:17:11.889 "is_configured": true, 00:17:11.889 "data_offset": 256, 00:17:11.889 "data_size": 7936 00:17:11.889 } 00:17:11.889 ] 00:17:11.889 }' 00:17:11.889 04:15:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:11.889 04:15:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:12.149 04:15:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:12.149 04:15:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.149 04:15:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:12.149 [2024-11-21 04:15:12.099930] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:12.149 [2024-11-21 04:15:12.119717] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 
00:17:12.150 04:15:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.150 04:15:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:12.409 [2024-11-21 04:15:12.126985] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:13.349 04:15:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:13.349 04:15:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:13.349 04:15:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:13.349 04:15:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:13.349 04:15:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:13.349 04:15:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.349 04:15:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.349 04:15:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.349 04:15:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:13.350 04:15:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.350 04:15:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:13.350 "name": "raid_bdev1", 00:17:13.350 "uuid": "c0e8d541-12ea-4722-8987-b0bba10cc952", 00:17:13.350 "strip_size_kb": 0, 00:17:13.350 "state": "online", 00:17:13.350 "raid_level": "raid1", 00:17:13.350 "superblock": true, 00:17:13.350 
"num_base_bdevs": 2, 00:17:13.350 "num_base_bdevs_discovered": 2, 00:17:13.350 "num_base_bdevs_operational": 2, 00:17:13.350 "process": { 00:17:13.350 "type": "rebuild", 00:17:13.350 "target": "spare", 00:17:13.350 "progress": { 00:17:13.350 "blocks": 2560, 00:17:13.350 "percent": 32 00:17:13.350 } 00:17:13.350 }, 00:17:13.350 "base_bdevs_list": [ 00:17:13.350 { 00:17:13.350 "name": "spare", 00:17:13.350 "uuid": "584e5652-62c5-5ab8-a633-c702a1abd3c8", 00:17:13.350 "is_configured": true, 00:17:13.350 "data_offset": 256, 00:17:13.350 "data_size": 7936 00:17:13.350 }, 00:17:13.350 { 00:17:13.350 "name": "BaseBdev2", 00:17:13.350 "uuid": "7f59610c-e66a-557d-bfcd-6575faea8b08", 00:17:13.350 "is_configured": true, 00:17:13.350 "data_offset": 256, 00:17:13.350 "data_size": 7936 00:17:13.350 } 00:17:13.350 ] 00:17:13.350 }' 00:17:13.350 04:15:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:13.350 04:15:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:13.350 04:15:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:13.350 04:15:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:13.350 04:15:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:13.350 04:15:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.350 04:15:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:13.350 [2024-11-21 04:15:13.278555] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:13.611 [2024-11-21 04:15:13.336121] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:13.611 
[2024-11-21 04:15:13.336175] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:13.611 [2024-11-21 04:15:13.336194] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:13.611 [2024-11-21 04:15:13.336202] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:13.611 04:15:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.611 04:15:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:13.611 04:15:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:13.611 04:15:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:13.611 04:15:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:13.611 04:15:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:13.611 04:15:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:13.611 04:15:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:13.611 04:15:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:13.611 04:15:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:13.611 04:15:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:13.611 04:15:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.611 04:15:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:17:13.611 04:15:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.611 04:15:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:13.611 04:15:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.611 04:15:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:13.611 "name": "raid_bdev1", 00:17:13.611 "uuid": "c0e8d541-12ea-4722-8987-b0bba10cc952", 00:17:13.611 "strip_size_kb": 0, 00:17:13.611 "state": "online", 00:17:13.611 "raid_level": "raid1", 00:17:13.611 "superblock": true, 00:17:13.611 "num_base_bdevs": 2, 00:17:13.611 "num_base_bdevs_discovered": 1, 00:17:13.611 "num_base_bdevs_operational": 1, 00:17:13.611 "base_bdevs_list": [ 00:17:13.611 { 00:17:13.611 "name": null, 00:17:13.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.611 "is_configured": false, 00:17:13.611 "data_offset": 0, 00:17:13.611 "data_size": 7936 00:17:13.611 }, 00:17:13.611 { 00:17:13.611 "name": "BaseBdev2", 00:17:13.611 "uuid": "7f59610c-e66a-557d-bfcd-6575faea8b08", 00:17:13.611 "is_configured": true, 00:17:13.611 "data_offset": 256, 00:17:13.611 "data_size": 7936 00:17:13.611 } 00:17:13.611 ] 00:17:13.611 }' 00:17:13.611 04:15:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:13.611 04:15:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:13.871 04:15:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:13.871 04:15:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:13.871 04:15:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:13.871 04:15:13 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:13.871 04:15:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:13.871 04:15:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.871 04:15:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.871 04:15:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.871 04:15:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:13.871 04:15:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.871 04:15:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:13.872 "name": "raid_bdev1", 00:17:13.872 "uuid": "c0e8d541-12ea-4722-8987-b0bba10cc952", 00:17:13.872 "strip_size_kb": 0, 00:17:13.872 "state": "online", 00:17:13.872 "raid_level": "raid1", 00:17:13.872 "superblock": true, 00:17:13.872 "num_base_bdevs": 2, 00:17:13.872 "num_base_bdevs_discovered": 1, 00:17:13.872 "num_base_bdevs_operational": 1, 00:17:13.872 "base_bdevs_list": [ 00:17:13.872 { 00:17:13.872 "name": null, 00:17:13.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.872 "is_configured": false, 00:17:13.872 "data_offset": 0, 00:17:13.872 "data_size": 7936 00:17:13.872 }, 00:17:13.872 { 00:17:13.872 "name": "BaseBdev2", 00:17:13.872 "uuid": "7f59610c-e66a-557d-bfcd-6575faea8b08", 00:17:13.872 "is_configured": true, 00:17:13.872 "data_offset": 256, 00:17:13.872 "data_size": 7936 00:17:13.872 } 00:17:13.872 ] 00:17:13.872 }' 00:17:13.872 04:15:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:14.132 04:15:13 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:14.132 04:15:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:14.132 04:15:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:14.132 04:15:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:14.132 04:15:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.132 04:15:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:14.132 [2024-11-21 04:15:13.906016] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:14.132 [2024-11-21 04:15:13.912050] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:17:14.132 04:15:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.132 04:15:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:14.132 [2024-11-21 04:15:13.914322] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:15.073 04:15:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:15.073 04:15:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:15.073 04:15:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:15.073 04:15:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:15.073 04:15:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:15.073 
04:15:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.073 04:15:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.073 04:15:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.073 04:15:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:15.073 04:15:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.073 04:15:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:15.073 "name": "raid_bdev1", 00:17:15.073 "uuid": "c0e8d541-12ea-4722-8987-b0bba10cc952", 00:17:15.073 "strip_size_kb": 0, 00:17:15.073 "state": "online", 00:17:15.073 "raid_level": "raid1", 00:17:15.073 "superblock": true, 00:17:15.073 "num_base_bdevs": 2, 00:17:15.073 "num_base_bdevs_discovered": 2, 00:17:15.073 "num_base_bdevs_operational": 2, 00:17:15.073 "process": { 00:17:15.073 "type": "rebuild", 00:17:15.073 "target": "spare", 00:17:15.073 "progress": { 00:17:15.073 "blocks": 2560, 00:17:15.073 "percent": 32 00:17:15.073 } 00:17:15.073 }, 00:17:15.073 "base_bdevs_list": [ 00:17:15.073 { 00:17:15.073 "name": "spare", 00:17:15.073 "uuid": "584e5652-62c5-5ab8-a633-c702a1abd3c8", 00:17:15.073 "is_configured": true, 00:17:15.073 "data_offset": 256, 00:17:15.073 "data_size": 7936 00:17:15.073 }, 00:17:15.073 { 00:17:15.073 "name": "BaseBdev2", 00:17:15.073 "uuid": "7f59610c-e66a-557d-bfcd-6575faea8b08", 00:17:15.073 "is_configured": true, 00:17:15.073 "data_offset": 256, 00:17:15.073 "data_size": 7936 00:17:15.073 } 00:17:15.073 ] 00:17:15.073 }' 00:17:15.073 04:15:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:15.073 04:15:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:15.073 04:15:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:15.334 04:15:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:15.334 04:15:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:15.334 04:15:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:15.334 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:15.334 04:15:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:17:15.334 04:15:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:15.334 04:15:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:17:15.334 04:15:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=631 00:17:15.334 04:15:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:15.334 04:15:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:15.334 04:15:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:15.334 04:15:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:15.334 04:15:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:15.334 04:15:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:15.334 04:15:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:17:15.334 04:15:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.334 04:15:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.334 04:15:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:15.334 04:15:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.334 04:15:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:15.334 "name": "raid_bdev1", 00:17:15.334 "uuid": "c0e8d541-12ea-4722-8987-b0bba10cc952", 00:17:15.334 "strip_size_kb": 0, 00:17:15.334 "state": "online", 00:17:15.334 "raid_level": "raid1", 00:17:15.334 "superblock": true, 00:17:15.334 "num_base_bdevs": 2, 00:17:15.334 "num_base_bdevs_discovered": 2, 00:17:15.334 "num_base_bdevs_operational": 2, 00:17:15.334 "process": { 00:17:15.334 "type": "rebuild", 00:17:15.334 "target": "spare", 00:17:15.334 "progress": { 00:17:15.334 "blocks": 2816, 00:17:15.334 "percent": 35 00:17:15.334 } 00:17:15.334 }, 00:17:15.334 "base_bdevs_list": [ 00:17:15.334 { 00:17:15.334 "name": "spare", 00:17:15.334 "uuid": "584e5652-62c5-5ab8-a633-c702a1abd3c8", 00:17:15.334 "is_configured": true, 00:17:15.334 "data_offset": 256, 00:17:15.334 "data_size": 7936 00:17:15.334 }, 00:17:15.334 { 00:17:15.334 "name": "BaseBdev2", 00:17:15.334 "uuid": "7f59610c-e66a-557d-bfcd-6575faea8b08", 00:17:15.334 "is_configured": true, 00:17:15.334 "data_offset": 256, 00:17:15.334 "data_size": 7936 00:17:15.334 } 00:17:15.334 ] 00:17:15.334 }' 00:17:15.334 04:15:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:15.334 04:15:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:15.334 04:15:15 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:15.334 04:15:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:15.334 04:15:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:16.274 04:15:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:16.274 04:15:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:16.274 04:15:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:16.274 04:15:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:16.274 04:15:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:16.274 04:15:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:16.274 04:15:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.274 04:15:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.274 04:15:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.274 04:15:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:16.274 04:15:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.534 04:15:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:16.534 "name": "raid_bdev1", 00:17:16.534 "uuid": "c0e8d541-12ea-4722-8987-b0bba10cc952", 00:17:16.534 "strip_size_kb": 0, 00:17:16.534 "state": 
"online", 00:17:16.534 "raid_level": "raid1", 00:17:16.534 "superblock": true, 00:17:16.534 "num_base_bdevs": 2, 00:17:16.534 "num_base_bdevs_discovered": 2, 00:17:16.534 "num_base_bdevs_operational": 2, 00:17:16.534 "process": { 00:17:16.534 "type": "rebuild", 00:17:16.534 "target": "spare", 00:17:16.534 "progress": { 00:17:16.534 "blocks": 5632, 00:17:16.534 "percent": 70 00:17:16.534 } 00:17:16.534 }, 00:17:16.534 "base_bdevs_list": [ 00:17:16.534 { 00:17:16.534 "name": "spare", 00:17:16.534 "uuid": "584e5652-62c5-5ab8-a633-c702a1abd3c8", 00:17:16.535 "is_configured": true, 00:17:16.535 "data_offset": 256, 00:17:16.535 "data_size": 7936 00:17:16.535 }, 00:17:16.535 { 00:17:16.535 "name": "BaseBdev2", 00:17:16.535 "uuid": "7f59610c-e66a-557d-bfcd-6575faea8b08", 00:17:16.535 "is_configured": true, 00:17:16.535 "data_offset": 256, 00:17:16.535 "data_size": 7936 00:17:16.535 } 00:17:16.535 ] 00:17:16.535 }' 00:17:16.535 04:15:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:16.535 04:15:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:16.535 04:15:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:16.535 04:15:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:16.535 04:15:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:17.105 [2024-11-21 04:15:17.034550] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:17.105 [2024-11-21 04:15:17.034703] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:17.105 [2024-11-21 04:15:17.034854] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:17.674 04:15:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:17.674 04:15:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:17.674 04:15:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:17.674 04:15:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:17.674 04:15:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:17.674 04:15:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:17.674 04:15:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.674 04:15:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.674 04:15:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.674 04:15:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:17.674 04:15:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.674 04:15:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:17.674 "name": "raid_bdev1", 00:17:17.674 "uuid": "c0e8d541-12ea-4722-8987-b0bba10cc952", 00:17:17.674 "strip_size_kb": 0, 00:17:17.674 "state": "online", 00:17:17.674 "raid_level": "raid1", 00:17:17.674 "superblock": true, 00:17:17.674 "num_base_bdevs": 2, 00:17:17.674 "num_base_bdevs_discovered": 2, 00:17:17.674 "num_base_bdevs_operational": 2, 00:17:17.674 "base_bdevs_list": [ 00:17:17.674 { 00:17:17.674 "name": "spare", 00:17:17.674 "uuid": "584e5652-62c5-5ab8-a633-c702a1abd3c8", 00:17:17.674 "is_configured": true, 00:17:17.674 "data_offset": 256, 
00:17:17.674 "data_size": 7936 00:17:17.674 }, 00:17:17.674 { 00:17:17.674 "name": "BaseBdev2", 00:17:17.674 "uuid": "7f59610c-e66a-557d-bfcd-6575faea8b08", 00:17:17.674 "is_configured": true, 00:17:17.674 "data_offset": 256, 00:17:17.674 "data_size": 7936 00:17:17.674 } 00:17:17.674 ] 00:17:17.674 }' 00:17:17.674 04:15:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:17.674 04:15:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:17.674 04:15:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:17.674 04:15:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:17.674 04:15:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:17:17.674 04:15:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:17.674 04:15:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:17.674 04:15:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:17.674 04:15:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:17.674 04:15:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:17.674 04:15:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.674 04:15:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.674 04:15:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.675 04:15:17 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:17.675 04:15:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.675 04:15:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:17.675 "name": "raid_bdev1", 00:17:17.675 "uuid": "c0e8d541-12ea-4722-8987-b0bba10cc952", 00:17:17.675 "strip_size_kb": 0, 00:17:17.675 "state": "online", 00:17:17.675 "raid_level": "raid1", 00:17:17.675 "superblock": true, 00:17:17.675 "num_base_bdevs": 2, 00:17:17.675 "num_base_bdevs_discovered": 2, 00:17:17.675 "num_base_bdevs_operational": 2, 00:17:17.675 "base_bdevs_list": [ 00:17:17.675 { 00:17:17.675 "name": "spare", 00:17:17.675 "uuid": "584e5652-62c5-5ab8-a633-c702a1abd3c8", 00:17:17.675 "is_configured": true, 00:17:17.675 "data_offset": 256, 00:17:17.675 "data_size": 7936 00:17:17.675 }, 00:17:17.675 { 00:17:17.675 "name": "BaseBdev2", 00:17:17.675 "uuid": "7f59610c-e66a-557d-bfcd-6575faea8b08", 00:17:17.675 "is_configured": true, 00:17:17.675 "data_offset": 256, 00:17:17.675 "data_size": 7936 00:17:17.675 } 00:17:17.675 ] 00:17:17.675 }' 00:17:17.675 04:15:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:17.675 04:15:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:17.675 04:15:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:17.675 04:15:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:17.675 04:15:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:17.675 04:15:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:17.675 04:15:17 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:17.675 04:15:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:17.675 04:15:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:17.675 04:15:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:17.675 04:15:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:17.675 04:15:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:17.675 04:15:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:17.675 04:15:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:17.675 04:15:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.675 04:15:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.675 04:15:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.675 04:15:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:17.934 04:15:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.934 04:15:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:17.934 "name": "raid_bdev1", 00:17:17.934 "uuid": "c0e8d541-12ea-4722-8987-b0bba10cc952", 00:17:17.934 "strip_size_kb": 0, 00:17:17.934 "state": "online", 00:17:17.934 "raid_level": "raid1", 00:17:17.934 "superblock": true, 00:17:17.934 "num_base_bdevs": 2, 00:17:17.934 "num_base_bdevs_discovered": 2, 
00:17:17.934 "num_base_bdevs_operational": 2, 00:17:17.934 "base_bdevs_list": [ 00:17:17.934 { 00:17:17.934 "name": "spare", 00:17:17.934 "uuid": "584e5652-62c5-5ab8-a633-c702a1abd3c8", 00:17:17.934 "is_configured": true, 00:17:17.934 "data_offset": 256, 00:17:17.934 "data_size": 7936 00:17:17.934 }, 00:17:17.934 { 00:17:17.934 "name": "BaseBdev2", 00:17:17.934 "uuid": "7f59610c-e66a-557d-bfcd-6575faea8b08", 00:17:17.934 "is_configured": true, 00:17:17.934 "data_offset": 256, 00:17:17.934 "data_size": 7936 00:17:17.934 } 00:17:17.934 ] 00:17:17.934 }' 00:17:17.934 04:15:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:17.934 04:15:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:18.194 04:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:18.194 04:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.194 04:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:18.194 [2024-11-21 04:15:18.068208] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:18.194 [2024-11-21 04:15:18.068297] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:18.194 [2024-11-21 04:15:18.068455] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:18.194 [2024-11-21 04:15:18.068560] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:18.194 [2024-11-21 04:15:18.068609] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:17:18.194 04:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.194 04:15:18 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:17:18.194 04:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.194 04:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.194 04:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:18.194 04:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.194 04:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:18.194 04:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:17:18.194 04:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:18.194 04:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:18.194 04:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.194 04:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:18.194 04:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.194 04:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:18.194 04:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.194 04:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:18.194 [2024-11-21 04:15:18.124130] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:18.194 [2024-11-21 04:15:18.124192] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:17:18.194 [2024-11-21 04:15:18.124214] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:18.194 [2024-11-21 04:15:18.124240] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:18.194 [2024-11-21 04:15:18.126568] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:18.194 [2024-11-21 04:15:18.126605] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:18.194 [2024-11-21 04:15:18.126657] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:18.194 [2024-11-21 04:15:18.126711] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:18.194 [2024-11-21 04:15:18.126812] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:18.194 spare 00:17:18.194 04:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.194 04:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:18.194 04:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.194 04:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:18.454 [2024-11-21 04:15:18.226708] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:17:18.454 [2024-11-21 04:15:18.226732] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:18.454 [2024-11-21 04:15:18.226830] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:17:18.454 [2024-11-21 04:15:18.226922] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:17:18.454 [2024-11-21 04:15:18.226934] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid 
bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:17:18.454 [2024-11-21 04:15:18.227008] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:18.454 04:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.454 04:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:18.454 04:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:18.454 04:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:18.454 04:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:18.454 04:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:18.454 04:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:18.454 04:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:18.454 04:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:18.454 04:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:18.454 04:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:18.454 04:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.454 04:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.454 04:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.454 04:15:18 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:18.454 04:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.454 04:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:18.454 "name": "raid_bdev1", 00:17:18.454 "uuid": "c0e8d541-12ea-4722-8987-b0bba10cc952", 00:17:18.454 "strip_size_kb": 0, 00:17:18.454 "state": "online", 00:17:18.454 "raid_level": "raid1", 00:17:18.454 "superblock": true, 00:17:18.454 "num_base_bdevs": 2, 00:17:18.454 "num_base_bdevs_discovered": 2, 00:17:18.455 "num_base_bdevs_operational": 2, 00:17:18.455 "base_bdevs_list": [ 00:17:18.455 { 00:17:18.455 "name": "spare", 00:17:18.455 "uuid": "584e5652-62c5-5ab8-a633-c702a1abd3c8", 00:17:18.455 "is_configured": true, 00:17:18.455 "data_offset": 256, 00:17:18.455 "data_size": 7936 00:17:18.455 }, 00:17:18.455 { 00:17:18.455 "name": "BaseBdev2", 00:17:18.455 "uuid": "7f59610c-e66a-557d-bfcd-6575faea8b08", 00:17:18.455 "is_configured": true, 00:17:18.455 "data_offset": 256, 00:17:18.455 "data_size": 7936 00:17:18.455 } 00:17:18.455 ] 00:17:18.455 }' 00:17:18.455 04:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:18.455 04:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:18.716 04:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:18.716 04:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:18.716 04:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:18.716 04:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:18.716 04:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:18.716 04:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.716 04:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.716 04:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.716 04:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:18.716 04:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.977 04:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:18.977 "name": "raid_bdev1", 00:17:18.977 "uuid": "c0e8d541-12ea-4722-8987-b0bba10cc952", 00:17:18.977 "strip_size_kb": 0, 00:17:18.977 "state": "online", 00:17:18.977 "raid_level": "raid1", 00:17:18.977 "superblock": true, 00:17:18.977 "num_base_bdevs": 2, 00:17:18.977 "num_base_bdevs_discovered": 2, 00:17:18.977 "num_base_bdevs_operational": 2, 00:17:18.977 "base_bdevs_list": [ 00:17:18.977 { 00:17:18.977 "name": "spare", 00:17:18.977 "uuid": "584e5652-62c5-5ab8-a633-c702a1abd3c8", 00:17:18.977 "is_configured": true, 00:17:18.977 "data_offset": 256, 00:17:18.977 "data_size": 7936 00:17:18.977 }, 00:17:18.977 { 00:17:18.977 "name": "BaseBdev2", 00:17:18.977 "uuid": "7f59610c-e66a-557d-bfcd-6575faea8b08", 00:17:18.977 "is_configured": true, 00:17:18.977 "data_offset": 256, 00:17:18.977 "data_size": 7936 00:17:18.977 } 00:17:18.977 ] 00:17:18.977 }' 00:17:18.977 04:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:18.977 04:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:18.977 04:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 
-- # jq -r '.process.target // "none"' 00:17:18.977 04:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:18.977 04:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.977 04:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.977 04:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:18.977 04:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:18.977 04:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.977 04:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:18.977 04:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:18.977 04:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.977 04:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:18.977 [2024-11-21 04:15:18.842926] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:18.977 04:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.977 04:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:18.977 04:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:18.977 04:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:18.977 04:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid1 00:17:18.977 04:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:18.977 04:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:18.977 04:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:18.977 04:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:18.977 04:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:18.977 04:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:18.977 04:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.977 04:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.977 04:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.977 04:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:18.977 04:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.977 04:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:18.977 "name": "raid_bdev1", 00:17:18.977 "uuid": "c0e8d541-12ea-4722-8987-b0bba10cc952", 00:17:18.977 "strip_size_kb": 0, 00:17:18.977 "state": "online", 00:17:18.977 "raid_level": "raid1", 00:17:18.977 "superblock": true, 00:17:18.977 "num_base_bdevs": 2, 00:17:18.977 "num_base_bdevs_discovered": 1, 00:17:18.977 "num_base_bdevs_operational": 1, 00:17:18.977 "base_bdevs_list": [ 00:17:18.977 { 00:17:18.977 "name": null, 00:17:18.977 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.977 
"is_configured": false, 00:17:18.977 "data_offset": 0, 00:17:18.977 "data_size": 7936 00:17:18.977 }, 00:17:18.977 { 00:17:18.977 "name": "BaseBdev2", 00:17:18.977 "uuid": "7f59610c-e66a-557d-bfcd-6575faea8b08", 00:17:18.977 "is_configured": true, 00:17:18.977 "data_offset": 256, 00:17:18.977 "data_size": 7936 00:17:18.977 } 00:17:18.977 ] 00:17:18.977 }' 00:17:18.977 04:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:18.978 04:15:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:19.547 04:15:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:19.547 04:15:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.547 04:15:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:19.547 [2024-11-21 04:15:19.314124] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:19.547 [2024-11-21 04:15:19.314360] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:19.547 [2024-11-21 04:15:19.314422] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:19.547 [2024-11-21 04:15:19.314489] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:19.547 [2024-11-21 04:15:19.320816] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:17:19.548 04:15:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.548 04:15:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:19.548 [2024-11-21 04:15:19.323138] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:20.488 04:15:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:20.488 04:15:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:20.488 04:15:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:20.488 04:15:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:20.488 04:15:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:20.488 04:15:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.488 04:15:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.488 04:15:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.488 04:15:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:20.488 04:15:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.488 04:15:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:17:20.488 "name": "raid_bdev1", 00:17:20.488 "uuid": "c0e8d541-12ea-4722-8987-b0bba10cc952", 00:17:20.488 "strip_size_kb": 0, 00:17:20.488 "state": "online", 00:17:20.488 "raid_level": "raid1", 00:17:20.488 "superblock": true, 00:17:20.488 "num_base_bdevs": 2, 00:17:20.488 "num_base_bdevs_discovered": 2, 00:17:20.488 "num_base_bdevs_operational": 2, 00:17:20.488 "process": { 00:17:20.488 "type": "rebuild", 00:17:20.488 "target": "spare", 00:17:20.488 "progress": { 00:17:20.488 "blocks": 2560, 00:17:20.488 "percent": 32 00:17:20.488 } 00:17:20.488 }, 00:17:20.488 "base_bdevs_list": [ 00:17:20.488 { 00:17:20.488 "name": "spare", 00:17:20.488 "uuid": "584e5652-62c5-5ab8-a633-c702a1abd3c8", 00:17:20.488 "is_configured": true, 00:17:20.488 "data_offset": 256, 00:17:20.488 "data_size": 7936 00:17:20.488 }, 00:17:20.488 { 00:17:20.488 "name": "BaseBdev2", 00:17:20.488 "uuid": "7f59610c-e66a-557d-bfcd-6575faea8b08", 00:17:20.488 "is_configured": true, 00:17:20.488 "data_offset": 256, 00:17:20.488 "data_size": 7936 00:17:20.488 } 00:17:20.488 ] 00:17:20.488 }' 00:17:20.488 04:15:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:20.488 04:15:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:20.488 04:15:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:20.749 04:15:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:20.749 04:15:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:20.749 04:15:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.749 04:15:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:20.749 [2024-11-21 04:15:20.475240] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:20.749 [2024-11-21 04:15:20.530927] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:20.749 [2024-11-21 04:15:20.531034] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:20.749 [2024-11-21 04:15:20.531055] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:20.749 [2024-11-21 04:15:20.531063] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:20.749 04:15:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.749 04:15:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:20.749 04:15:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:20.749 04:15:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:20.749 04:15:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:20.749 04:15:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:20.749 04:15:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:20.749 04:15:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:20.749 04:15:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:20.749 04:15:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:20.749 04:15:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:20.749 04:15:20 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.749 04:15:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.749 04:15:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.749 04:15:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:20.749 04:15:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.749 04:15:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:20.749 "name": "raid_bdev1", 00:17:20.749 "uuid": "c0e8d541-12ea-4722-8987-b0bba10cc952", 00:17:20.749 "strip_size_kb": 0, 00:17:20.749 "state": "online", 00:17:20.749 "raid_level": "raid1", 00:17:20.749 "superblock": true, 00:17:20.749 "num_base_bdevs": 2, 00:17:20.749 "num_base_bdevs_discovered": 1, 00:17:20.749 "num_base_bdevs_operational": 1, 00:17:20.749 "base_bdevs_list": [ 00:17:20.749 { 00:17:20.749 "name": null, 00:17:20.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.749 "is_configured": false, 00:17:20.749 "data_offset": 0, 00:17:20.749 "data_size": 7936 00:17:20.749 }, 00:17:20.749 { 00:17:20.749 "name": "BaseBdev2", 00:17:20.749 "uuid": "7f59610c-e66a-557d-bfcd-6575faea8b08", 00:17:20.749 "is_configured": true, 00:17:20.749 "data_offset": 256, 00:17:20.749 "data_size": 7936 00:17:20.749 } 00:17:20.749 ] 00:17:20.749 }' 00:17:20.749 04:15:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:20.749 04:15:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:21.320 04:15:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:21.320 04:15:21 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.320 04:15:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:21.320 [2024-11-21 04:15:21.009185] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:21.320 [2024-11-21 04:15:21.009295] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:21.320 [2024-11-21 04:15:21.009342] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:17:21.320 [2024-11-21 04:15:21.009382] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:21.320 [2024-11-21 04:15:21.009653] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:21.320 [2024-11-21 04:15:21.009699] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:21.320 [2024-11-21 04:15:21.009792] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:21.320 [2024-11-21 04:15:21.009829] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:21.320 [2024-11-21 04:15:21.009901] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:21.320 [2024-11-21 04:15:21.009973] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:21.320 [2024-11-21 04:15:21.014719] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:17:21.320 spare 00:17:21.320 04:15:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.320 04:15:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:21.320 [2024-11-21 04:15:21.016949] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:22.259 04:15:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:22.259 04:15:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:22.259 04:15:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:22.259 04:15:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:22.260 04:15:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:22.260 04:15:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:22.260 04:15:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.260 04:15:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.260 04:15:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:22.260 04:15:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.260 04:15:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:17:22.260 "name": "raid_bdev1", 00:17:22.260 "uuid": "c0e8d541-12ea-4722-8987-b0bba10cc952", 00:17:22.260 "strip_size_kb": 0, 00:17:22.260 "state": "online", 00:17:22.260 "raid_level": "raid1", 00:17:22.260 "superblock": true, 00:17:22.260 "num_base_bdevs": 2, 00:17:22.260 "num_base_bdevs_discovered": 2, 00:17:22.260 "num_base_bdevs_operational": 2, 00:17:22.260 "process": { 00:17:22.260 "type": "rebuild", 00:17:22.260 "target": "spare", 00:17:22.260 "progress": { 00:17:22.260 "blocks": 2560, 00:17:22.260 "percent": 32 00:17:22.260 } 00:17:22.260 }, 00:17:22.260 "base_bdevs_list": [ 00:17:22.260 { 00:17:22.260 "name": "spare", 00:17:22.260 "uuid": "584e5652-62c5-5ab8-a633-c702a1abd3c8", 00:17:22.260 "is_configured": true, 00:17:22.260 "data_offset": 256, 00:17:22.260 "data_size": 7936 00:17:22.260 }, 00:17:22.260 { 00:17:22.260 "name": "BaseBdev2", 00:17:22.260 "uuid": "7f59610c-e66a-557d-bfcd-6575faea8b08", 00:17:22.260 "is_configured": true, 00:17:22.260 "data_offset": 256, 00:17:22.260 "data_size": 7936 00:17:22.260 } 00:17:22.260 ] 00:17:22.260 }' 00:17:22.260 04:15:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:22.260 04:15:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:22.260 04:15:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:22.260 04:15:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:22.260 04:15:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:22.260 04:15:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.260 04:15:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:22.260 [2024-11-21 
04:15:22.152978] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:22.260 [2024-11-21 04:15:22.224658] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:22.260 [2024-11-21 04:15:22.224718] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:22.260 [2024-11-21 04:15:22.224732] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:22.260 [2024-11-21 04:15:22.224742] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:22.520 04:15:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.520 04:15:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:22.520 04:15:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:22.520 04:15:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:22.520 04:15:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:22.520 04:15:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:22.520 04:15:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:22.520 04:15:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:22.520 04:15:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:22.520 04:15:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:22.520 04:15:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:22.520 04:15:22 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.520 04:15:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:22.520 04:15:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.520 04:15:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:22.520 04:15:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.520 04:15:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:22.520 "name": "raid_bdev1", 00:17:22.520 "uuid": "c0e8d541-12ea-4722-8987-b0bba10cc952", 00:17:22.520 "strip_size_kb": 0, 00:17:22.520 "state": "online", 00:17:22.520 "raid_level": "raid1", 00:17:22.520 "superblock": true, 00:17:22.520 "num_base_bdevs": 2, 00:17:22.520 "num_base_bdevs_discovered": 1, 00:17:22.520 "num_base_bdevs_operational": 1, 00:17:22.520 "base_bdevs_list": [ 00:17:22.520 { 00:17:22.520 "name": null, 00:17:22.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.520 "is_configured": false, 00:17:22.520 "data_offset": 0, 00:17:22.520 "data_size": 7936 00:17:22.520 }, 00:17:22.520 { 00:17:22.520 "name": "BaseBdev2", 00:17:22.520 "uuid": "7f59610c-e66a-557d-bfcd-6575faea8b08", 00:17:22.520 "is_configured": true, 00:17:22.520 "data_offset": 256, 00:17:22.520 "data_size": 7936 00:17:22.520 } 00:17:22.520 ] 00:17:22.520 }' 00:17:22.520 04:15:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:22.520 04:15:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:22.780 04:15:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:22.780 04:15:22 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:22.780 04:15:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:22.780 04:15:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:22.780 04:15:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:22.780 04:15:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.780 04:15:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.780 04:15:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:22.780 04:15:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:22.780 04:15:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.780 04:15:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:22.780 "name": "raid_bdev1", 00:17:22.780 "uuid": "c0e8d541-12ea-4722-8987-b0bba10cc952", 00:17:22.780 "strip_size_kb": 0, 00:17:22.780 "state": "online", 00:17:22.780 "raid_level": "raid1", 00:17:22.780 "superblock": true, 00:17:22.780 "num_base_bdevs": 2, 00:17:22.780 "num_base_bdevs_discovered": 1, 00:17:22.780 "num_base_bdevs_operational": 1, 00:17:22.780 "base_bdevs_list": [ 00:17:22.780 { 00:17:22.780 "name": null, 00:17:22.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.780 "is_configured": false, 00:17:22.780 "data_offset": 0, 00:17:22.780 "data_size": 7936 00:17:22.780 }, 00:17:22.780 { 00:17:22.780 "name": "BaseBdev2", 00:17:22.780 "uuid": "7f59610c-e66a-557d-bfcd-6575faea8b08", 00:17:22.780 "is_configured": true, 00:17:22.780 "data_offset": 256, 
00:17:22.780 "data_size": 7936 00:17:22.780 } 00:17:22.780 ] 00:17:22.780 }' 00:17:22.780 04:15:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:22.780 04:15:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:22.780 04:15:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:23.040 04:15:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:23.040 04:15:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:23.040 04:15:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.040 04:15:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:23.040 04:15:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.040 04:15:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:23.040 04:15:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.040 04:15:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:23.040 [2024-11-21 04:15:22.810212] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:23.040 [2024-11-21 04:15:22.810271] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:23.040 [2024-11-21 04:15:22.810291] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:23.040 [2024-11-21 04:15:22.810302] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:23.040 [2024-11-21 04:15:22.810475] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:23.040 [2024-11-21 04:15:22.810492] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:23.040 [2024-11-21 04:15:22.810537] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:23.040 [2024-11-21 04:15:22.810555] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:23.040 [2024-11-21 04:15:22.810563] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:23.040 [2024-11-21 04:15:22.810578] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:23.040 BaseBdev1 00:17:23.040 04:15:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.040 04:15:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:23.981 04:15:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:23.981 04:15:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:23.981 04:15:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:23.981 04:15:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:23.981 04:15:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:23.981 04:15:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:23.981 04:15:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:23.981 04:15:23 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:23.981 04:15:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:23.981 04:15:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:23.981 04:15:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.981 04:15:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:23.981 04:15:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.981 04:15:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:23.981 04:15:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.981 04:15:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:23.981 "name": "raid_bdev1", 00:17:23.981 "uuid": "c0e8d541-12ea-4722-8987-b0bba10cc952", 00:17:23.981 "strip_size_kb": 0, 00:17:23.981 "state": "online", 00:17:23.981 "raid_level": "raid1", 00:17:23.981 "superblock": true, 00:17:23.981 "num_base_bdevs": 2, 00:17:23.981 "num_base_bdevs_discovered": 1, 00:17:23.981 "num_base_bdevs_operational": 1, 00:17:23.981 "base_bdevs_list": [ 00:17:23.981 { 00:17:23.981 "name": null, 00:17:23.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.981 "is_configured": false, 00:17:23.981 "data_offset": 0, 00:17:23.981 "data_size": 7936 00:17:23.981 }, 00:17:23.981 { 00:17:23.981 "name": "BaseBdev2", 00:17:23.981 "uuid": "7f59610c-e66a-557d-bfcd-6575faea8b08", 00:17:23.981 "is_configured": true, 00:17:23.981 "data_offset": 256, 00:17:23.981 "data_size": 7936 00:17:23.981 } 00:17:23.981 ] 00:17:23.981 }' 00:17:23.981 04:15:23 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:23.981 04:15:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:24.551 04:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:24.551 04:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:24.551 04:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:24.551 04:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:24.551 04:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:24.551 04:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.551 04:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.551 04:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:24.551 04:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:24.551 04:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.551 04:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:24.551 "name": "raid_bdev1", 00:17:24.551 "uuid": "c0e8d541-12ea-4722-8987-b0bba10cc952", 00:17:24.551 "strip_size_kb": 0, 00:17:24.551 "state": "online", 00:17:24.551 "raid_level": "raid1", 00:17:24.551 "superblock": true, 00:17:24.551 "num_base_bdevs": 2, 00:17:24.551 "num_base_bdevs_discovered": 1, 00:17:24.551 "num_base_bdevs_operational": 1, 00:17:24.551 "base_bdevs_list": [ 00:17:24.551 { 00:17:24.551 "name": 
null, 00:17:24.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.551 "is_configured": false, 00:17:24.551 "data_offset": 0, 00:17:24.551 "data_size": 7936 00:17:24.551 }, 00:17:24.551 { 00:17:24.551 "name": "BaseBdev2", 00:17:24.551 "uuid": "7f59610c-e66a-557d-bfcd-6575faea8b08", 00:17:24.551 "is_configured": true, 00:17:24.551 "data_offset": 256, 00:17:24.551 "data_size": 7936 00:17:24.551 } 00:17:24.551 ] 00:17:24.551 }' 00:17:24.551 04:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:24.551 04:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:24.551 04:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:24.551 04:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:24.551 04:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:24.551 04:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:17:24.551 04:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:24.551 04:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:24.551 04:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:24.551 04:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:24.551 04:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:24.551 04:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:24.551 04:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.551 04:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:24.551 [2024-11-21 04:15:24.423526] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:24.551 [2024-11-21 04:15:24.423662] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:24.551 [2024-11-21 04:15:24.423674] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:24.551 request: 00:17:24.551 { 00:17:24.551 "base_bdev": "BaseBdev1", 00:17:24.551 "raid_bdev": "raid_bdev1", 00:17:24.551 "method": "bdev_raid_add_base_bdev", 00:17:24.551 "req_id": 1 00:17:24.551 } 00:17:24.551 Got JSON-RPC error response 00:17:24.551 response: 00:17:24.551 { 00:17:24.551 "code": -22, 00:17:24.551 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:24.551 } 00:17:24.551 04:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:24.551 04:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:17:24.551 04:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:24.551 04:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:24.551 04:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:24.551 04:15:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:25.491 04:15:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:17:25.491 04:15:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:25.491 04:15:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:25.491 04:15:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:25.491 04:15:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:25.491 04:15:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:25.491 04:15:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:25.491 04:15:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:25.491 04:15:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:25.491 04:15:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:25.491 04:15:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.491 04:15:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.491 04:15:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.491 04:15:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:25.491 04:15:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.751 04:15:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:25.751 "name": "raid_bdev1", 00:17:25.751 "uuid": "c0e8d541-12ea-4722-8987-b0bba10cc952", 00:17:25.751 "strip_size_kb": 0, 
00:17:25.751 "state": "online", 00:17:25.751 "raid_level": "raid1", 00:17:25.751 "superblock": true, 00:17:25.751 "num_base_bdevs": 2, 00:17:25.751 "num_base_bdevs_discovered": 1, 00:17:25.751 "num_base_bdevs_operational": 1, 00:17:25.751 "base_bdevs_list": [ 00:17:25.751 { 00:17:25.751 "name": null, 00:17:25.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.751 "is_configured": false, 00:17:25.751 "data_offset": 0, 00:17:25.751 "data_size": 7936 00:17:25.751 }, 00:17:25.751 { 00:17:25.751 "name": "BaseBdev2", 00:17:25.751 "uuid": "7f59610c-e66a-557d-bfcd-6575faea8b08", 00:17:25.751 "is_configured": true, 00:17:25.751 "data_offset": 256, 00:17:25.751 "data_size": 7936 00:17:25.751 } 00:17:25.751 ] 00:17:25.751 }' 00:17:25.751 04:15:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:25.751 04:15:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:26.010 04:15:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:26.010 04:15:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:26.010 04:15:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:26.010 04:15:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:26.010 04:15:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:26.010 04:15:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.010 04:15:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:26.010 04:15:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.010 
04:15:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:26.010 04:15:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.010 04:15:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:26.010 "name": "raid_bdev1", 00:17:26.010 "uuid": "c0e8d541-12ea-4722-8987-b0bba10cc952", 00:17:26.010 "strip_size_kb": 0, 00:17:26.010 "state": "online", 00:17:26.010 "raid_level": "raid1", 00:17:26.010 "superblock": true, 00:17:26.010 "num_base_bdevs": 2, 00:17:26.010 "num_base_bdevs_discovered": 1, 00:17:26.010 "num_base_bdevs_operational": 1, 00:17:26.010 "base_bdevs_list": [ 00:17:26.010 { 00:17:26.010 "name": null, 00:17:26.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.010 "is_configured": false, 00:17:26.010 "data_offset": 0, 00:17:26.010 "data_size": 7936 00:17:26.010 }, 00:17:26.010 { 00:17:26.010 "name": "BaseBdev2", 00:17:26.010 "uuid": "7f59610c-e66a-557d-bfcd-6575faea8b08", 00:17:26.010 "is_configured": true, 00:17:26.010 "data_offset": 256, 00:17:26.010 "data_size": 7936 00:17:26.010 } 00:17:26.010 ] 00:17:26.010 }' 00:17:26.010 04:15:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:26.010 04:15:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:26.011 04:15:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:26.271 04:15:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:26.271 04:15:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 99392 00:17:26.271 04:15:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 99392 ']' 00:17:26.271 04:15:26 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 99392 00:17:26.271 04:15:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:17:26.271 04:15:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:26.271 04:15:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99392 00:17:26.271 04:15:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:26.271 04:15:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:26.271 04:15:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99392' 00:17:26.271 killing process with pid 99392 00:17:26.271 04:15:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 99392 00:17:26.271 Received shutdown signal, test time was about 60.000000 seconds 00:17:26.271 00:17:26.271 Latency(us) 00:17:26.271 [2024-11-21T04:15:26.244Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:26.271 [2024-11-21T04:15:26.244Z] =================================================================================================================== 00:17:26.271 [2024-11-21T04:15:26.244Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:26.271 [2024-11-21 04:15:26.047696] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:26.271 [2024-11-21 04:15:26.047819] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:26.271 04:15:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 99392 00:17:26.271 [2024-11-21 04:15:26.047868] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:17:26.271 [2024-11-21 04:15:26.047877] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:17:26.271 [2024-11-21 04:15:26.110485] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:26.531 04:15:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:17:26.531 00:17:26.531 real 0m16.395s 00:17:26.531 user 0m21.819s 00:17:26.531 sys 0m1.734s 00:17:26.531 ************************************ 00:17:26.531 END TEST raid_rebuild_test_sb_md_interleaved 00:17:26.531 ************************************ 00:17:26.531 04:15:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:26.531 04:15:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:26.531 04:15:26 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:17:26.531 04:15:26 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:17:26.531 04:15:26 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 99392 ']' 00:17:26.531 04:15:26 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 99392 00:17:26.791 04:15:26 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:17:26.791 00:17:26.791 real 10m11.808s 00:17:26.791 user 14m15.061s 00:17:26.791 sys 1m58.960s 00:17:26.791 ************************************ 00:17:26.791 END TEST bdev_raid 00:17:26.791 ************************************ 00:17:26.791 04:15:26 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:26.791 04:15:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:26.791 04:15:26 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:17:26.791 04:15:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:26.791 04:15:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:26.791 04:15:26 -- common/autotest_common.sh@10 -- # set +x 00:17:26.791 
************************************ 00:17:26.791 START TEST spdkcli_raid 00:17:26.791 ************************************ 00:17:26.791 04:15:26 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:17:26.791 * Looking for test storage... 00:17:26.791 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:17:26.791 04:15:26 spdkcli_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:26.791 04:15:26 spdkcli_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:17:26.791 04:15:26 spdkcli_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:27.052 04:15:26 spdkcli_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:27.052 04:15:26 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:27.052 04:15:26 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:27.052 04:15:26 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:27.052 04:15:26 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:17:27.052 04:15:26 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:17:27.052 04:15:26 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:17:27.052 04:15:26 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:17:27.052 04:15:26 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:17:27.052 04:15:26 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:17:27.052 04:15:26 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:17:27.052 04:15:26 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:27.052 04:15:26 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:17:27.052 04:15:26 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:17:27.052 04:15:26 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:27.052 04:15:26 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:27.052 04:15:26 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:17:27.052 04:15:26 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:17:27.052 04:15:26 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:27.052 04:15:26 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:17:27.052 04:15:26 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:17:27.052 04:15:26 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:17:27.052 04:15:26 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:17:27.052 04:15:26 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:27.052 04:15:26 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:17:27.052 04:15:26 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:17:27.052 04:15:26 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:27.052 04:15:26 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:27.052 04:15:26 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:17:27.052 04:15:26 spdkcli_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:27.052 04:15:26 spdkcli_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:27.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.052 --rc genhtml_branch_coverage=1 00:17:27.052 --rc genhtml_function_coverage=1 00:17:27.052 --rc genhtml_legend=1 00:17:27.052 --rc geninfo_all_blocks=1 00:17:27.052 --rc geninfo_unexecuted_blocks=1 00:17:27.052 00:17:27.052 ' 00:17:27.053 04:15:26 spdkcli_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:27.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.053 --rc genhtml_branch_coverage=1 00:17:27.053 --rc genhtml_function_coverage=1 00:17:27.053 --rc genhtml_legend=1 00:17:27.053 --rc geninfo_all_blocks=1 00:17:27.053 --rc geninfo_unexecuted_blocks=1 00:17:27.053 00:17:27.053 ' 00:17:27.053 
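The xtrace above steps through the `cmp_versions` helper from `scripts/common.sh`, which splits dotted version strings on `.-:` and compares them component by component (here deciding whether the installed lcov 1.x predates version 2). A minimal standalone sketch of the same idea — function and variable names here are illustrative, not the script's own:

```shell
#!/usr/bin/env bash
# Return 0 (true) if version $1 is strictly less than version $2,
# comparing dot-separated numeric components left to right.
# Missing components are treated as 0, so 2 compares like 2.0.
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1  # versions are equal, hence not less-than
}

version_lt 1.15 2 && echo "older"    # 1.15 < 2, as in the log above
version_lt 2.1 2.1 || echo "not-lt"  # equal versions are not less-than
```

The componentwise loop is what makes 1.9 compare below 1.10 — a plain string comparison would get that case wrong.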
04:15:26 spdkcli_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:27.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.053 --rc genhtml_branch_coverage=1 00:17:27.053 --rc genhtml_function_coverage=1 00:17:27.053 --rc genhtml_legend=1 00:17:27.053 --rc geninfo_all_blocks=1 00:17:27.053 --rc geninfo_unexecuted_blocks=1 00:17:27.053 00:17:27.053 ' 00:17:27.053 04:15:26 spdkcli_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:27.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.053 --rc genhtml_branch_coverage=1 00:17:27.053 --rc genhtml_function_coverage=1 00:17:27.053 --rc genhtml_legend=1 00:17:27.053 --rc geninfo_all_blocks=1 00:17:27.053 --rc geninfo_unexecuted_blocks=1 00:17:27.053 00:17:27.053 ' 00:17:27.053 04:15:26 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:17:27.053 04:15:26 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:17:27.053 04:15:26 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:17:27.053 04:15:26 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:17:27.053 04:15:26 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:17:27.053 04:15:26 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:17:27.053 04:15:26 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:17:27.053 04:15:26 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:17:27.053 04:15:26 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:17:27.053 04:15:26 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:17:27.053 04:15:26 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:17:27.053 04:15:26 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:17:27.053 04:15:26 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:17:27.053 04:15:26 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:17:27.053 04:15:26 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:17:27.053 04:15:26 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:17:27.053 04:15:26 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:17:27.053 04:15:26 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:17:27.053 04:15:26 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:17:27.053 04:15:26 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:17:27.053 04:15:26 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:17:27.053 04:15:26 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:17:27.053 04:15:26 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:17:27.053 04:15:26 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:17:27.053 04:15:26 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:17:27.053 04:15:26 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:17:27.053 04:15:26 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:17:27.053 04:15:26 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:17:27.053 04:15:26 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:17:27.053 04:15:26 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:17:27.053 04:15:26 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:17:27.053 04:15:26 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:17:27.053 04:15:26 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:17:27.053 04:15:26 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:27.053 04:15:26 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:27.053 04:15:26 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:17:27.053 04:15:26 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=100063 00:17:27.053 04:15:26 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:17:27.053 04:15:26 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 100063 00:17:27.053 04:15:26 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 100063 ']' 00:17:27.053 04:15:26 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:27.053 04:15:26 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:27.053 04:15:26 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:27.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:27.053 04:15:26 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:27.053 04:15:26 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:27.053 [2024-11-21 04:15:26.957449] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
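The `spdkcli_job.py` invocation above drives spdkcli shell commands (`/bdevs/malloc create …`) that map onto SPDK JSON-RPC methods. A rough, hedged sketch of the equivalent raw `rpc.py` calls for this run's create/delete cycle — method names and flags are my reading of SPDK's RPC surface and should be checked against `rpc.py --help` for the tree under test; the `run` wrapper is a hypothetical dry-run stand-in:

```shell
# Dry-run stand-in so the mapping can be shown without a running SPDK target;
# replace the echo with "$@" to actually issue the RPCs.
run() { echo "+ $*"; }

# /bdevs/malloc create 8 512 MallocN  ->  8 MiB bdev, 512-byte blocks
run rpc.py bdev_malloc_create -b Malloc1 8 512
run rpc.py bdev_malloc_create -b Malloc2 8 512

# /bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4
#   -> raid0 over both malloc bdevs, 4 KiB strip size
run rpc.py bdev_raid_create -n testraid -r 0 -z 4 -b "Malloc1 Malloc2"

# Teardown mirrors the delete commands later in this test
run rpc.py bdev_raid_delete testraid
run rpc.py bdev_malloc_delete Malloc1
run rpc.py bdev_malloc_delete Malloc2
```

spdkcli is a thin front end over the same RPC server, which is why the `Executing command:` lines below correspond one-to-one with bdev create/delete operations.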
00:17:27.053 [2024-11-21 04:15:26.957664] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100063 ] 00:17:27.313 [2024-11-21 04:15:27.112383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:27.313 [2024-11-21 04:15:27.156098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:27.313 [2024-11-21 04:15:27.156186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:27.883 04:15:27 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:27.883 04:15:27 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:17:27.883 04:15:27 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:17:27.883 04:15:27 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:27.883 04:15:27 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:27.883 04:15:27 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:17:27.883 04:15:27 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:27.883 04:15:27 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:27.883 04:15:27 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:17:27.883 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:17:27.883 ' 00:17:29.794 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:17:29.794 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:17:29.794 04:15:29 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:17:29.794 04:15:29 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:29.794 04:15:29 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:17:29.794 04:15:29 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:17:29.794 04:15:29 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:29.794 04:15:29 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:29.794 04:15:29 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:17:29.794 ' 00:17:30.735 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:17:30.735 04:15:30 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:17:30.735 04:15:30 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:30.735 04:15:30 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:30.735 04:15:30 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:17:30.735 04:15:30 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:30.735 04:15:30 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:30.735 04:15:30 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:17:30.735 04:15:30 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:17:31.345 04:15:31 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:17:31.345 04:15:31 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:17:31.345 04:15:31 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:17:31.345 04:15:31 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:31.345 04:15:31 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:31.345 04:15:31 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:17:31.345 04:15:31 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:31.345 04:15:31 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:31.345 04:15:31 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:17:31.345 ' 00:17:32.297 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:17:32.557 04:15:32 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:17:32.557 04:15:32 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:32.557 04:15:32 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:32.557 04:15:32 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:17:32.557 04:15:32 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:32.557 04:15:32 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:32.557 04:15:32 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:17:32.557 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:17:32.557 ' 00:17:33.940 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:17:33.940 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:17:33.940 04:15:33 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:17:33.940 04:15:33 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:33.940 04:15:33 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:34.200 04:15:33 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 100063 00:17:34.200 04:15:33 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 100063 ']' 00:17:34.200 04:15:33 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 100063 00:17:34.200 04:15:33 spdkcli_raid -- 
common/autotest_common.sh@959 -- # uname 00:17:34.200 04:15:33 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:34.200 04:15:33 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 100063 00:17:34.200 killing process with pid 100063 00:17:34.200 04:15:33 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:34.200 04:15:33 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:34.200 04:15:33 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 100063' 00:17:34.200 04:15:33 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 100063 00:17:34.200 04:15:33 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 100063 00:17:34.770 04:15:34 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:17:34.770 04:15:34 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 100063 ']' 00:17:34.770 04:15:34 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 100063 00:17:34.770 04:15:34 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 100063 ']' 00:17:34.770 04:15:34 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 100063 00:17:34.770 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (100063) - No such process 00:17:34.770 04:15:34 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 100063 is not found' 00:17:34.770 Process with pid 100063 is not found 00:17:34.770 04:15:34 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:17:34.770 04:15:34 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:17:34.770 04:15:34 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:17:34.770 04:15:34 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:17:34.770 ************************************ 00:17:34.770 END TEST 
spdkcli_raid 00:17:34.770 ************************************ 00:17:34.770 00:17:34.770 real 0m8.011s 00:17:34.770 user 0m16.783s 00:17:34.770 sys 0m1.231s 00:17:34.770 04:15:34 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:34.770 04:15:34 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:34.770 04:15:34 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:17:34.770 04:15:34 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:34.770 04:15:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:34.770 04:15:34 -- common/autotest_common.sh@10 -- # set +x 00:17:34.770 ************************************ 00:17:34.770 START TEST blockdev_raid5f 00:17:34.770 ************************************ 00:17:34.770 04:15:34 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:17:35.031 * Looking for test storage... 00:17:35.031 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:17:35.031 04:15:34 blockdev_raid5f -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:35.031 04:15:34 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lcov --version 00:17:35.031 04:15:34 blockdev_raid5f -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:35.031 04:15:34 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:35.031 04:15:34 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:35.031 04:15:34 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:35.031 04:15:34 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:35.031 04:15:34 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:17:35.031 04:15:34 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:17:35.031 04:15:34 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:17:35.031 04:15:34 blockdev_raid5f -- scripts/common.sh@337 -- 
# read -ra ver2 00:17:35.031 04:15:34 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:17:35.031 04:15:34 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:17:35.031 04:15:34 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:17:35.031 04:15:34 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:35.031 04:15:34 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:17:35.031 04:15:34 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:17:35.031 04:15:34 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:35.031 04:15:34 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:35.031 04:15:34 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:17:35.031 04:15:34 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:17:35.031 04:15:34 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:35.031 04:15:34 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:17:35.031 04:15:34 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:17:35.031 04:15:34 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:17:35.031 04:15:34 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:17:35.031 04:15:34 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:35.031 04:15:34 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:17:35.031 04:15:34 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:17:35.031 04:15:34 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:35.031 04:15:34 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:35.031 04:15:34 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:17:35.031 04:15:34 blockdev_raid5f -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:35.031 04:15:34 blockdev_raid5f -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 
00:17:35.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:35.031 --rc genhtml_branch_coverage=1 00:17:35.031 --rc genhtml_function_coverage=1 00:17:35.031 --rc genhtml_legend=1 00:17:35.031 --rc geninfo_all_blocks=1 00:17:35.031 --rc geninfo_unexecuted_blocks=1 00:17:35.031 00:17:35.031 ' 00:17:35.031 04:15:34 blockdev_raid5f -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:35.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:35.031 --rc genhtml_branch_coverage=1 00:17:35.031 --rc genhtml_function_coverage=1 00:17:35.031 --rc genhtml_legend=1 00:17:35.031 --rc geninfo_all_blocks=1 00:17:35.031 --rc geninfo_unexecuted_blocks=1 00:17:35.031 00:17:35.031 ' 00:17:35.031 04:15:34 blockdev_raid5f -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:35.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:35.031 --rc genhtml_branch_coverage=1 00:17:35.031 --rc genhtml_function_coverage=1 00:17:35.031 --rc genhtml_legend=1 00:17:35.031 --rc geninfo_all_blocks=1 00:17:35.031 --rc geninfo_unexecuted_blocks=1 00:17:35.031 00:17:35.031 ' 00:17:35.031 04:15:34 blockdev_raid5f -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:35.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:35.031 --rc genhtml_branch_coverage=1 00:17:35.031 --rc genhtml_function_coverage=1 00:17:35.031 --rc genhtml_legend=1 00:17:35.031 --rc geninfo_all_blocks=1 00:17:35.031 --rc geninfo_unexecuted_blocks=1 00:17:35.031 00:17:35.031 ' 00:17:35.031 04:15:34 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:17:35.031 04:15:34 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:17:35.031 04:15:34 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:17:35.031 04:15:34 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:17:35.031 04:15:34 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:17:35.031 04:15:34 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:17:35.031 04:15:34 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:17:35.031 04:15:34 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:17:35.031 04:15:34 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:17:35.031 04:15:34 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:17:35.031 04:15:34 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:17:35.031 04:15:34 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:17:35.031 04:15:34 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:17:35.031 04:15:34 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:17:35.031 04:15:34 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:17:35.031 04:15:34 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:17:35.031 04:15:34 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:17:35.031 04:15:34 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:17:35.031 04:15:34 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:17:35.031 04:15:34 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:17:35.031 04:15:34 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:17:35.031 04:15:34 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:17:35.031 04:15:34 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:17:35.031 04:15:34 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:17:35.031 04:15:34 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=100315 00:17:35.031 04:15:34 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:17:35.031 04:15:34 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess 
"$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:17:35.031 04:15:34 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 100315 00:17:35.031 04:15:34 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 100315 ']' 00:17:35.031 04:15:34 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:35.031 04:15:34 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:35.031 04:15:34 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:35.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:35.031 04:15:34 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:35.031 04:15:34 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:35.292 [2024-11-21 04:15:35.037748] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:17:35.292 [2024-11-21 04:15:35.037983] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100315 ] 00:17:35.292 [2024-11-21 04:15:35.196589] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:35.292 [2024-11-21 04:15:35.237693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:36.233 04:15:35 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:36.233 04:15:35 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:17:36.233 04:15:35 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:17:36.233 04:15:35 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:17:36.233 04:15:35 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:17:36.233 04:15:35 blockdev_raid5f -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.233 04:15:35 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:36.233 Malloc0 00:17:36.233 Malloc1 00:17:36.233 Malloc2 00:17:36.233 04:15:35 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.233 04:15:35 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:17:36.233 04:15:35 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.233 04:15:35 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:36.233 04:15:35 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.233 04:15:35 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:17:36.233 04:15:35 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:17:36.233 04:15:35 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.233 04:15:35 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:36.233 04:15:35 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.233 04:15:35 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:17:36.233 04:15:35 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.233 04:15:35 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:36.233 04:15:35 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.233 04:15:35 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:17:36.233 04:15:35 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.233 04:15:35 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:36.233 04:15:35 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.233 04:15:36 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:17:36.233 04:15:36 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == 
false)' 00:17:36.233 04:15:36 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:17:36.234 04:15:36 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.234 04:15:36 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:36.234 04:15:36 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.234 04:15:36 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:17:36.234 04:15:36 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:17:36.234 04:15:36 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "13444b15-a785-4ffd-bdad-c9274225c969"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "13444b15-a785-4ffd-bdad-c9274225c969",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "13444b15-a785-4ffd-bdad-c9274225c969",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "f01148ce-038d-47df-ab7d-c7c1f0d82da3",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": 
"02867f0c-cd75-45e5-b3fc-01a9d0c36e9d",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "d6678097-be09-4ec9-b863-37aa79e29741",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:17:36.234 04:15:36 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:17:36.234 04:15:36 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:17:36.234 04:15:36 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:17:36.234 04:15:36 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 100315 00:17:36.234 04:15:36 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 100315 ']' 00:17:36.234 04:15:36 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 100315 00:17:36.234 04:15:36 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:17:36.234 04:15:36 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:36.234 04:15:36 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 100315 00:17:36.234 killing process with pid 100315 00:17:36.234 04:15:36 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:36.234 04:15:36 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:36.234 04:15:36 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 100315' 00:17:36.234 04:15:36 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 100315 00:17:36.234 04:15:36 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 100315 00:17:37.175 04:15:36 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:37.175 04:15:36 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:17:37.175 
04:15:36 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:37.175 04:15:36 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:37.175 04:15:36 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:37.175 ************************************ 00:17:37.175 START TEST bdev_hello_world 00:17:37.175 ************************************ 00:17:37.175 04:15:36 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:17:37.175 [2024-11-21 04:15:36.911845] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:17:37.175 [2024-11-21 04:15:36.912037] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100360 ] 00:17:37.175 [2024-11-21 04:15:37.068404] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.175 [2024-11-21 04:15:37.111958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:37.435 [2024-11-21 04:15:37.360742] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:17:37.435 [2024-11-21 04:15:37.360842] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:17:37.435 [2024-11-21 04:15:37.360874] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:17:37.435 [2024-11-21 04:15:37.361245] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:17:37.435 [2024-11-21 04:15:37.361458] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:17:37.435 [2024-11-21 04:15:37.361525] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:17:37.435 [2024-11-21 04:15:37.361623] hello_bdev.c: 65:read_complete: *NOTICE*: Read string 
from bdev : Hello World! 00:17:37.435 00:17:37.435 [2024-11-21 04:15:37.361670] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:17:38.005 00:17:38.005 real 0m0.901s 00:17:38.005 user 0m0.503s 00:17:38.005 sys 0m0.286s 00:17:38.005 04:15:37 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:38.005 04:15:37 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:17:38.005 ************************************ 00:17:38.005 END TEST bdev_hello_world 00:17:38.005 ************************************ 00:17:38.005 04:15:37 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:17:38.005 04:15:37 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:38.005 04:15:37 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:38.005 04:15:37 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:38.005 ************************************ 00:17:38.005 START TEST bdev_bounds 00:17:38.005 ************************************ 00:17:38.005 04:15:37 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:17:38.005 04:15:37 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=100391 00:17:38.005 04:15:37 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:17:38.005 04:15:37 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:17:38.005 Process bdevio pid: 100391 00:17:38.005 04:15:37 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 100391' 00:17:38.005 04:15:37 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 100391 00:17:38.005 04:15:37 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 100391 ']' 00:17:38.005 
04:15:37 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:38.005 04:15:37 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:38.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:38.005 04:15:37 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:38.005 04:15:37 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:38.005 04:15:37 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:17:38.006 [2024-11-21 04:15:37.892371] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:17:38.006 [2024-11-21 04:15:37.892498] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100391 ] 00:17:38.265 [2024-11-21 04:15:38.049196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:38.265 [2024-11-21 04:15:38.095277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:38.265 [2024-11-21 04:15:38.095311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:38.265 [2024-11-21 04:15:38.095441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:38.835 04:15:38 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:38.835 04:15:38 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:17:38.835 04:15:38 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:17:38.835 I/O targets: 00:17:38.835 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:17:38.835 
00:17:38.835 00:17:38.835 CUnit - A unit testing framework for C - Version 2.1-3 00:17:38.835 http://cunit.sourceforge.net/ 00:17:38.835 00:17:38.835 00:17:38.835 Suite: bdevio tests on: raid5f 00:17:38.835 Test: blockdev write read block ...passed 00:17:38.835 Test: blockdev write zeroes read block ...passed 00:17:39.094 Test: blockdev write zeroes read no split ...passed 00:17:39.094 Test: blockdev write zeroes read split ...passed 00:17:39.094 Test: blockdev write zeroes read split partial ...passed 00:17:39.094 Test: blockdev reset ...passed 00:17:39.094 Test: blockdev write read 8 blocks ...passed 00:17:39.094 Test: blockdev write read size > 128k ...passed 00:17:39.094 Test: blockdev write read invalid size ...passed 00:17:39.094 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:39.094 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:39.094 Test: blockdev write read max offset ...passed 00:17:39.094 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:39.094 Test: blockdev writev readv 8 blocks ...passed 00:17:39.094 Test: blockdev writev readv 30 x 1block ...passed 00:17:39.094 Test: blockdev writev readv block ...passed 00:17:39.094 Test: blockdev writev readv size > 128k ...passed 00:17:39.094 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:39.094 Test: blockdev comparev and writev ...passed 00:17:39.094 Test: blockdev nvme passthru rw ...passed 00:17:39.094 Test: blockdev nvme passthru vendor specific ...passed 00:17:39.094 Test: blockdev nvme admin passthru ...passed 00:17:39.094 Test: blockdev copy ...passed 00:17:39.094 00:17:39.094 Run Summary: Type Total Ran Passed Failed Inactive 00:17:39.094 suites 1 1 n/a 0 0 00:17:39.094 tests 23 23 23 0 0 00:17:39.094 asserts 130 130 130 0 n/a 00:17:39.094 00:17:39.094 Elapsed time = 0.367 seconds 00:17:39.094 0 00:17:39.094 04:15:38 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 100391 
00:17:39.094 04:15:38 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 100391 ']' 00:17:39.094 04:15:38 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 100391 00:17:39.094 04:15:38 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:17:39.094 04:15:38 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:39.094 04:15:38 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 100391 00:17:39.094 killing process with pid 100391 00:17:39.094 04:15:39 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:39.094 04:15:39 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:39.094 04:15:39 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 100391' 00:17:39.094 04:15:39 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 100391 00:17:39.094 04:15:39 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 100391 00:17:39.662 ************************************ 00:17:39.662 END TEST bdev_bounds 00:17:39.662 ************************************ 00:17:39.662 04:15:39 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:17:39.662 00:17:39.662 real 0m1.594s 00:17:39.662 user 0m3.772s 00:17:39.662 sys 0m0.428s 00:17:39.662 04:15:39 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:39.662 04:15:39 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:17:39.662 04:15:39 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:17:39.662 04:15:39 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:39.662 04:15:39 blockdev_raid5f -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:17:39.662 04:15:39 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:39.662 ************************************ 00:17:39.662 START TEST bdev_nbd 00:17:39.662 ************************************ 00:17:39.662 04:15:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:17:39.662 04:15:39 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:17:39.662 04:15:39 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:17:39.662 04:15:39 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:39.662 04:15:39 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:17:39.662 04:15:39 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:17:39.662 04:15:39 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:17:39.662 04:15:39 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:17:39.662 04:15:39 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:17:39.662 04:15:39 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:17:39.662 04:15:39 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:17:39.662 04:15:39 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:17:39.662 04:15:39 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:17:39.662 04:15:39 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:17:39.662 04:15:39 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:17:39.662 04:15:39 
blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:17:39.662 04:15:39 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=100440 00:17:39.662 04:15:39 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:17:39.662 04:15:39 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:17:39.662 04:15:39 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 100440 /var/tmp/spdk-nbd.sock 00:17:39.662 04:15:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 100440 ']' 00:17:39.662 04:15:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:17:39.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:17:39.662 04:15:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:39.662 04:15:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:17:39.662 04:15:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:39.662 04:15:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:17:39.662 [2024-11-21 04:15:39.568105] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:17:39.662 [2024-11-21 04:15:39.568240] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:39.959 [2024-11-21 04:15:39.726614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:39.959 [2024-11-21 04:15:39.766225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:40.528 04:15:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:40.528 04:15:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:17:40.528 04:15:40 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:17:40.528 04:15:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:40.528 04:15:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:17:40.528 04:15:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:17:40.528 04:15:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:17:40.528 04:15:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:40.528 04:15:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:17:40.528 04:15:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:17:40.528 04:15:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:17:40.529 04:15:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:17:40.529 04:15:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:17:40.529 04:15:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:17:40.529 04:15:40 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:17:40.788 04:15:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:17:40.788 04:15:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:17:40.788 04:15:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:17:40.788 04:15:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:40.788 04:15:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:40.788 04:15:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:40.788 04:15:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:40.788 04:15:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:40.788 04:15:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:40.788 04:15:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:40.788 04:15:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:40.788 04:15:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:40.788 1+0 records in 00:17:40.788 1+0 records out 00:17:40.788 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000498363 s, 8.2 MB/s 00:17:40.788 04:15:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:40.788 04:15:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:40.788 04:15:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:40.788 04:15:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:17:40.788 04:15:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:40.788 04:15:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:40.788 04:15:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:17:40.788 04:15:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:41.048 04:15:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:17:41.048 { 00:17:41.048 "nbd_device": "/dev/nbd0", 00:17:41.048 "bdev_name": "raid5f" 00:17:41.048 } 00:17:41.048 ]' 00:17:41.048 04:15:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:17:41.048 04:15:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:17:41.048 { 00:17:41.048 "nbd_device": "/dev/nbd0", 00:17:41.048 "bdev_name": "raid5f" 00:17:41.048 } 00:17:41.048 ]' 00:17:41.048 04:15:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:17:41.048 04:15:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:17:41.048 04:15:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:41.048 04:15:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:41.048 04:15:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:41.048 04:15:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:41.048 04:15:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:41.048 04:15:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:41.308 04:15:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:17:41.308 04:15:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:41.308 04:15:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:41.308 04:15:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:41.308 04:15:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:41.308 04:15:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:41.308 04:15:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:41.308 04:15:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:41.308 04:15:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:41.308 04:15:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:41.308 04:15:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:41.568 04:15:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:17:41.568 04:15:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:17:41.568 04:15:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:41.568 04:15:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:17:41.568 04:15:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:17:41.568 04:15:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:41.568 04:15:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:17:41.568 04:15:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:17:41.568 04:15:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:17:41.568 04:15:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:17:41.568 04:15:41 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:17:41.568 04:15:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:17:41.568 04:15:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:17:41.568 04:15:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:41.568 04:15:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:17:41.568 04:15:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:17:41.568 04:15:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:17:41.568 04:15:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:17:41.568 04:15:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:17:41.568 04:15:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:41.569 04:15:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:17:41.569 04:15:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:41.569 04:15:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:41.569 04:15:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:41.569 04:15:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:17:41.569 04:15:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:41.569 04:15:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:41.569 04:15:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:17:41.828 /dev/nbd0 00:17:41.828 04:15:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:41.828 04:15:41 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:41.828 04:15:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:41.829 04:15:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:41.829 04:15:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:41.829 04:15:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:41.829 04:15:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:41.829 04:15:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:41.829 04:15:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:41.829 04:15:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:41.829 04:15:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:41.829 1+0 records in 00:17:41.829 1+0 records out 00:17:41.829 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000395927 s, 10.3 MB/s 00:17:41.829 04:15:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:41.829 04:15:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:41.829 04:15:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:41.829 04:15:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:41.829 04:15:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:41.829 04:15:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:41.829 04:15:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:41.829 04:15:41 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:41.829 04:15:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:41.829 04:15:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:42.089 04:15:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:17:42.089 { 00:17:42.089 "nbd_device": "/dev/nbd0", 00:17:42.089 "bdev_name": "raid5f" 00:17:42.089 } 00:17:42.089 ]' 00:17:42.089 04:15:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:42.089 04:15:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:17:42.089 { 00:17:42.089 "nbd_device": "/dev/nbd0", 00:17:42.089 "bdev_name": "raid5f" 00:17:42.089 } 00:17:42.089 ]' 00:17:42.089 04:15:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:17:42.089 04:15:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:17:42.089 04:15:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:42.089 04:15:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:17:42.089 04:15:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:17:42.089 04:15:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:17:42.089 04:15:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:17:42.089 04:15:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:17:42.089 04:15:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:17:42.089 04:15:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:42.089 04:15:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:17:42.089 04:15:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:42.089 04:15:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:17:42.089 04:15:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:17:42.089 256+0 records in 00:17:42.089 256+0 records out 00:17:42.089 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124786 s, 84.0 MB/s 00:17:42.089 04:15:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:42.089 04:15:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:17:42.089 256+0 records in 00:17:42.089 256+0 records out 00:17:42.089 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0289851 s, 36.2 MB/s 00:17:42.089 04:15:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:17:42.089 04:15:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:17:42.089 04:15:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:42.089 04:15:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:17:42.089 04:15:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:42.089 04:15:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:17:42.089 04:15:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:17:42.089 04:15:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:42.089 04:15:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:17:42.089 04:15:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:42.089 04:15:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:17:42.089 04:15:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:42.089 04:15:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:42.089 04:15:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:42.089 04:15:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:42.089 04:15:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:42.089 04:15:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:42.350 04:15:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:42.350 04:15:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:42.350 04:15:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:42.350 04:15:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:42.350 04:15:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:42.350 04:15:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:42.350 04:15:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:42.350 04:15:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:42.350 04:15:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:42.350 04:15:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:42.350 04:15:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:17:42.609 04:15:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:17:42.609 04:15:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:17:42.609 04:15:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:42.609 04:15:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:17:42.609 04:15:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:42.609 04:15:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:17:42.609 04:15:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:17:42.609 04:15:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:17:42.609 04:15:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:17:42.609 04:15:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:17:42.609 04:15:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:17:42.609 04:15:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:17:42.609 04:15:42 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:17:42.609 04:15:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:42.609 04:15:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:17:42.609 04:15:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:17:42.870 malloc_lvol_verify 00:17:42.870 04:15:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:17:42.870 ade24e4b-267d-4923-9e45-b01ae355d98e 00:17:43.131 04:15:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:17:43.131 c2510198-e66a-4d6f-a6ae-1e2f0d2b7b87 00:17:43.131 04:15:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:17:43.391 /dev/nbd0 00:17:43.391 04:15:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:17:43.391 04:15:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:17:43.391 04:15:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:17:43.391 04:15:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:17:43.391 04:15:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:17:43.391 mke2fs 1.47.0 (5-Feb-2023) 00:17:43.391 Discarding device blocks: 0/4096 done 00:17:43.391 Creating filesystem with 4096 1k blocks and 1024 inodes 00:17:43.391 00:17:43.391 Allocating group tables: 0/1 done 00:17:43.391 Writing inode tables: 0/1 done 00:17:43.391 Creating journal (1024 blocks): done 00:17:43.391 Writing superblocks and filesystem accounting information: 0/1 done 00:17:43.391 00:17:43.391 04:15:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:17:43.391 04:15:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:43.391 04:15:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:43.391 04:15:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:43.391 04:15:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:43.391 04:15:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:43.391 04:15:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:43.651 04:15:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:43.651 04:15:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:43.651 04:15:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:43.651 04:15:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:43.651 04:15:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:43.651 04:15:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:43.651 04:15:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:43.651 04:15:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:43.651 04:15:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 100440 00:17:43.651 04:15:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 100440 ']' 00:17:43.651 04:15:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 100440 00:17:43.651 04:15:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:17:43.651 04:15:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:43.651 04:15:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 100440 00:17:43.651 killing process with pid 100440 00:17:43.651 04:15:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:43.651 04:15:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:43.651 04:15:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 100440' 00:17:43.651 04:15:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 100440 00:17:43.651 04:15:43 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@978 -- # wait 100440 00:17:43.912 04:15:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:17:43.912 00:17:43.912 real 0m4.393s 00:17:43.912 user 0m6.241s 00:17:43.912 sys 0m1.307s 00:17:43.912 ************************************ 00:17:43.912 END TEST bdev_nbd 00:17:43.912 ************************************ 00:17:43.912 04:15:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:43.912 04:15:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:17:44.173 04:15:43 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:17:44.173 04:15:43 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:17:44.173 04:15:43 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:17:44.173 04:15:43 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:17:44.173 04:15:43 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:44.173 04:15:43 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:44.173 04:15:43 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:44.173 ************************************ 00:17:44.173 START TEST bdev_fio 00:17:44.173 ************************************ 00:17:44.173 04:15:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:17:44.173 04:15:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:17:44.173 04:15:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:17:44.173 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:17:44.173 04:15:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:17:44.173 04:15:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:17:44.173 04:15:43 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:17:44.173 04:15:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:17:44.173 04:15:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:17:44.173 04:15:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:44.173 04:15:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:17:44.173 04:15:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:17:44.173 04:15:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:17:44.173 04:15:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:17:44.173 04:15:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:17:44.173 04:15:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:17:44.173 04:15:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:17:44.173 04:15:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:44.173 04:15:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:17:44.173 04:15:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:17:44.173 04:15:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:17:44.173 04:15:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:17:44.173 04:15:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:17:44.173 04:15:44 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:17:44.173 04:15:44 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:17:44.173 04:15:44 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:17:44.173 04:15:44 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:17:44.173 04:15:44 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:17:44.173 04:15:44 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:17:44.173 04:15:44 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:44.173 04:15:44 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:17:44.173 04:15:44 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:44.173 04:15:44 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:17:44.173 ************************************ 00:17:44.173 START TEST bdev_fio_rw_verify 00:17:44.173 ************************************ 00:17:44.173 04:15:44 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:44.173 04:15:44 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:44.173 04:15:44 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:44.173 04:15:44 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:44.173 04:15:44 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:44.173 04:15:44 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:44.173 04:15:44 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:17:44.173 04:15:44 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:44.173 04:15:44 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:44.173 04:15:44 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:44.173 04:15:44 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:17:44.173 04:15:44 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:44.434 04:15:44 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:44.434 04:15:44 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:44.434 04:15:44 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1351 -- # break 00:17:44.434 04:15:44 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:44.434 04:15:44 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:44.434 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:44.434 fio-3.35 00:17:44.434 Starting 1 thread 00:17:56.657 00:17:56.657 job_raid5f: (groupid=0, jobs=1): err= 0: pid=100623: Thu Nov 21 04:15:54 2024 00:17:56.657 read: IOPS=12.4k, BW=48.4MiB/s (50.7MB/s)(484MiB/10001msec) 00:17:56.657 slat (usec): min=17, max=242, avg=18.91, stdev= 3.14 00:17:56.657 clat (usec): min=12, max=944, avg=130.96, stdev=47.34 00:17:56.657 lat (usec): min=31, max=1186, avg=149.87, stdev=48.25 00:17:56.657 clat percentiles (usec): 00:17:56.657 | 50.000th=[ 133], 99.000th=[ 219], 99.900th=[ 388], 99.990th=[ 824], 00:17:56.657 | 99.999th=[ 947] 00:17:56.657 write: IOPS=13.0k, BW=50.7MiB/s (53.2MB/s)(501MiB/9876msec); 0 zone resets 00:17:56.657 slat (usec): min=7, max=336, avg=16.23, stdev= 3.80 00:17:56.657 clat (usec): min=58, max=1315, avg=296.59, stdev=43.48 00:17:56.657 lat (usec): min=73, max=1651, avg=312.81, stdev=44.56 00:17:56.657 clat percentiles (usec): 00:17:56.657 | 50.000th=[ 302], 99.000th=[ 379], 99.900th=[ 652], 99.990th=[ 1205], 00:17:56.657 | 99.999th=[ 1303] 00:17:56.657 bw ( KiB/s): min=48408, max=53288, per=98.82%, avg=51346.95, stdev=1357.75, samples=19 00:17:56.657 iops : min=12102, max=13322, avg=12836.74, stdev=339.44, samples=19 00:17:56.657 lat (usec) : 20=0.01%, 50=0.01%, 
100=15.49%, 250=39.80%, 500=44.58% 00:17:56.657 lat (usec) : 750=0.08%, 1000=0.03% 00:17:56.657 lat (msec) : 2=0.02% 00:17:56.657 cpu : usr=98.84%, sys=0.43%, ctx=25, majf=0, minf=13228 00:17:56.657 IO depths : 1=7.6%, 2=19.8%, 4=55.2%, 8=17.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:56.657 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:56.657 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:56.657 issued rwts: total=123874,128290,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:56.657 latency : target=0, window=0, percentile=100.00%, depth=8 00:17:56.657 00:17:56.657 Run status group 0 (all jobs): 00:17:56.657 READ: bw=48.4MiB/s (50.7MB/s), 48.4MiB/s-48.4MiB/s (50.7MB/s-50.7MB/s), io=484MiB (507MB), run=10001-10001msec 00:17:56.657 WRITE: bw=50.7MiB/s (53.2MB/s), 50.7MiB/s-50.7MiB/s (53.2MB/s-53.2MB/s), io=501MiB (525MB), run=9876-9876msec 00:17:56.657 ----------------------------------------------------- 00:17:56.657 Suppressions used: 00:17:56.657 count bytes template 00:17:56.657 1 7 /usr/src/fio/parse.c 00:17:56.657 571 54816 /usr/src/fio/iolog.c 00:17:56.657 1 8 libtcmalloc_minimal.so 00:17:56.657 1 904 libcrypto.so 00:17:56.657 ----------------------------------------------------- 00:17:56.657 00:17:56.657 ************************************ 00:17:56.657 END TEST bdev_fio_rw_verify 00:17:56.657 ************************************ 00:17:56.657 00:17:56.657 real 0m11.372s 00:17:56.657 user 0m11.622s 00:17:56.657 sys 0m0.675s 00:17:56.657 04:15:55 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:56.657 04:15:55 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:17:56.657 04:15:55 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:17:56.657 04:15:55 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:56.658 04:15:55 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:17:56.658 04:15:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:56.658 04:15:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:17:56.658 04:15:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:17:56.658 04:15:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:17:56.658 04:15:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:17:56.658 04:15:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:17:56.658 04:15:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:17:56.658 04:15:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:17:56.658 04:15:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:56.658 04:15:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:17:56.658 04:15:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:17:56.658 04:15:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:17:56.658 04:15:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:17:56.658 04:15:55 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "13444b15-a785-4ffd-bdad-c9274225c969"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "13444b15-a785-4ffd-bdad-c9274225c969",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "13444b15-a785-4ffd-bdad-c9274225c969",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "f01148ce-038d-47df-ab7d-c7c1f0d82da3",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "02867f0c-cd75-45e5-b3fc-01a9d0c36e9d",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "d6678097-be09-4ec9-b863-37aa79e29741",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:17:56.658 04:15:55 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:17:56.658 04:15:55 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:17:56.658 04:15:55 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:56.658 04:15:55 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:17:56.658 /home/vagrant/spdk_repo/spdk 00:17:56.658 04:15:55 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:17:56.658 04:15:55 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:17:56.658 00:17:56.658 real 0m11.663s 00:17:56.658 user 0m11.742s 00:17:56.658 sys 0m0.807s 00:17:56.658 ************************************ 00:17:56.658 END TEST bdev_fio 00:17:56.658 ************************************ 00:17:56.658 04:15:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:56.658 04:15:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:17:56.658 04:15:55 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:56.658 04:15:55 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:17:56.658 04:15:55 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:17:56.658 04:15:55 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:56.658 04:15:55 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:56.658 ************************************ 00:17:56.658 START TEST bdev_verify 00:17:56.658 ************************************ 00:17:56.658 04:15:55 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:17:56.658 [2024-11-21 04:15:55.753190] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:17:56.658 [2024-11-21 04:15:55.753315] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100781 ] 00:17:56.658 [2024-11-21 04:15:55.886411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:56.658 [2024-11-21 04:15:55.930153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:56.658 [2024-11-21 04:15:55.930273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:56.658 Running I/O for 5 seconds... 00:17:58.534 10949.00 IOPS, 42.77 MiB/s [2024-11-21T04:15:59.447Z] 11015.50 IOPS, 43.03 MiB/s [2024-11-21T04:16:00.392Z] 11072.67 IOPS, 43.25 MiB/s [2024-11-21T04:16:01.330Z] 11073.00 IOPS, 43.25 MiB/s [2024-11-21T04:16:01.330Z] 11092.60 IOPS, 43.33 MiB/s 00:18:01.357 Latency(us) 00:18:01.357 [2024-11-21T04:16:01.330Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:01.357 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:01.357 Verification LBA range: start 0x0 length 0x2000 00:18:01.357 raid5f : 5.02 6644.26 25.95 0.00 0.00 28939.34 246.83 20948.63 00:18:01.357 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:01.357 Verification LBA range: start 0x2000 length 0x2000 00:18:01.357 raid5f : 5.02 4453.75 17.40 0.00 0.00 43144.53 118.94 31594.65 00:18:01.357 [2024-11-21T04:16:01.330Z] =================================================================================================================== 00:18:01.357 [2024-11-21T04:16:01.330Z] Total : 11098.01 43.35 0.00 0.00 34643.90 118.94 31594.65 00:18:01.616 00:18:01.616 real 0m5.892s 00:18:01.616 user 0m10.950s 00:18:01.616 sys 0m0.290s 00:18:01.616 04:16:01 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:01.616 
************************************ 00:18:01.616 END TEST bdev_verify 00:18:01.616 ************************************ 00:18:01.616 04:16:01 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:18:01.876 04:16:01 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:18:01.876 04:16:01 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:18:01.876 04:16:01 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:01.876 04:16:01 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:01.876 ************************************ 00:18:01.876 START TEST bdev_verify_big_io 00:18:01.876 ************************************ 00:18:01.876 04:16:01 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:18:01.876 [2024-11-21 04:16:01.715275] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:18:01.876 [2024-11-21 04:16:01.715380] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100857 ] 00:18:01.876 [2024-11-21 04:16:01.847281] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:02.135 [2024-11-21 04:16:01.890872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:02.135 [2024-11-21 04:16:01.890966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:02.395 Running I/O for 5 seconds... 
00:18:04.275 633.00 IOPS, 39.56 MiB/s [2024-11-21T04:16:05.630Z] 761.00 IOPS, 47.56 MiB/s [2024-11-21T04:16:06.200Z] 782.00 IOPS, 48.88 MiB/s [2024-11-21T04:16:07.584Z] 793.25 IOPS, 49.58 MiB/s [2024-11-21T04:16:07.584Z] 799.40 IOPS, 49.96 MiB/s 00:18:07.611 Latency(us) 00:18:07.611 [2024-11-21T04:16:07.584Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:07.611 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:18:07.611 Verification LBA range: start 0x0 length 0x200 00:18:07.611 raid5f : 5.23 461.26 28.83 0.00 0.00 6939719.42 214.64 302209.68 00:18:07.611 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:18:07.611 Verification LBA range: start 0x200 length 0x200 00:18:07.611 raid5f : 5.32 357.65 22.35 0.00 0.00 8835341.05 211.06 379135.78 00:18:07.611 [2024-11-21T04:16:07.584Z] =================================================================================================================== 00:18:07.611 [2024-11-21T04:16:07.584Z] Total : 818.91 51.18 0.00 0.00 7775532.20 211.06 379135.78 00:18:07.871 00:18:07.871 real 0m6.181s 00:18:07.871 user 0m11.535s 00:18:07.871 sys 0m0.284s 00:18:07.871 04:16:07 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:07.871 04:16:07 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:18:07.871 ************************************ 00:18:07.871 END TEST bdev_verify_big_io 00:18:07.871 ************************************ 00:18:08.131 04:16:07 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:08.131 04:16:07 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:18:08.131 04:16:07 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:08.131 04:16:07 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:08.131 ************************************ 00:18:08.131 START TEST bdev_write_zeroes 00:18:08.131 ************************************ 00:18:08.131 04:16:07 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:08.131 [2024-11-21 04:16:07.960938] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:18:08.131 [2024-11-21 04:16:07.961128] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100944 ] 00:18:08.131 [2024-11-21 04:16:08.092524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:08.391 [2024-11-21 04:16:08.131327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:08.652 Running I/O for 1 seconds... 
00:18:09.592 29847.00 IOPS, 116.59 MiB/s 00:18:09.592 Latency(us) 00:18:09.592 [2024-11-21T04:16:09.565Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:09.592 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:18:09.592 raid5f : 1.01 29808.52 116.44 0.00 0.00 4280.90 1402.30 5866.76 00:18:09.592 [2024-11-21T04:16:09.565Z] =================================================================================================================== 00:18:09.592 [2024-11-21T04:16:09.565Z] Total : 29808.52 116.44 0.00 0.00 4280.90 1402.30 5866.76 00:18:09.875 00:18:09.875 real 0m1.871s 00:18:09.875 user 0m1.489s 00:18:09.875 sys 0m0.268s 00:18:09.875 04:16:09 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:09.875 04:16:09 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:18:09.875 ************************************ 00:18:09.875 END TEST bdev_write_zeroes 00:18:09.875 ************************************ 00:18:09.875 04:16:09 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:09.875 04:16:09 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:18:09.875 04:16:09 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:09.875 04:16:09 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:09.875 ************************************ 00:18:09.875 START TEST bdev_json_nonenclosed 00:18:09.875 ************************************ 00:18:09.875 04:16:09 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:10.136 [2024-11-21 
04:16:09.901492] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:18:10.136 [2024-11-21 04:16:09.901670] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100982 ] 00:18:10.136 [2024-11-21 04:16:10.032994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.136 [2024-11-21 04:16:10.074144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:10.136 [2024-11-21 04:16:10.074370] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:18:10.136 [2024-11-21 04:16:10.074441] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:18:10.136 [2024-11-21 04:16:10.074503] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:10.397 00:18:10.397 real 0m0.361s 00:18:10.397 user 0m0.139s 00:18:10.397 sys 0m0.118s 00:18:10.397 04:16:10 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:10.397 04:16:10 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:18:10.397 ************************************ 00:18:10.397 END TEST bdev_json_nonenclosed 00:18:10.397 ************************************ 00:18:10.397 04:16:10 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:10.397 04:16:10 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:18:10.397 04:16:10 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:10.397 04:16:10 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:10.397 
************************************ 00:18:10.397 START TEST bdev_json_nonarray 00:18:10.397 ************************************ 00:18:10.397 04:16:10 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:10.397 [2024-11-21 04:16:10.327083] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:18:10.397 [2024-11-21 04:16:10.327261] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101013 ] 00:18:10.657 [2024-11-21 04:16:10.457962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.657 [2024-11-21 04:16:10.497923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:10.657 [2024-11-21 04:16:10.498151] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:18:10.657 [2024-11-21 04:16:10.498212] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:18:10.657 [2024-11-21 04:16:10.498258] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:10.657 00:18:10.657 real 0m0.353s 00:18:10.658 user 0m0.142s 00:18:10.658 sys 0m0.107s 00:18:10.658 04:16:10 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:10.658 04:16:10 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:18:10.658 ************************************ 00:18:10.658 END TEST bdev_json_nonarray 00:18:10.658 ************************************ 00:18:10.919 04:16:10 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]] 00:18:10.919 04:16:10 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]] 00:18:10.919 04:16:10 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]] 00:18:10.919 04:16:10 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:18:10.919 04:16:10 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup 00:18:10.919 04:16:10 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:18:10.919 04:16:10 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:10.919 04:16:10 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:18:10.919 04:16:10 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:18:10.919 04:16:10 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:18:10.919 04:16:10 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:18:10.919 ************************************ 00:18:10.919 END TEST blockdev_raid5f 00:18:10.919 ************************************ 00:18:10.919 00:18:10.919 real 0m35.989s 00:18:10.919 user 0m48.616s 00:18:10.919 sys 0m5.082s 00:18:10.919 04:16:10 blockdev_raid5f -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:18:10.919 04:16:10 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:10.919 04:16:10 -- spdk/autotest.sh@194 -- # uname -s 00:18:10.919 04:16:10 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:18:10.919 04:16:10 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:18:10.919 04:16:10 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:18:10.919 04:16:10 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:18:10.919 04:16:10 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:18:10.919 04:16:10 -- spdk/autotest.sh@260 -- # timing_exit lib 00:18:10.919 04:16:10 -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:10.919 04:16:10 -- common/autotest_common.sh@10 -- # set +x 00:18:10.919 04:16:10 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:18:10.919 04:16:10 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:18:10.919 04:16:10 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:18:10.919 04:16:10 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:18:10.920 04:16:10 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:18:10.920 04:16:10 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:18:10.920 04:16:10 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:18:10.920 04:16:10 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:18:10.920 04:16:10 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:18:10.920 04:16:10 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:18:10.920 04:16:10 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:18:10.920 04:16:10 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:18:10.920 04:16:10 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:18:10.920 04:16:10 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:18:10.920 04:16:10 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:18:10.920 04:16:10 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:18:10.920 04:16:10 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:18:10.920 04:16:10 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:18:10.920 04:16:10 -- spdk/autotest.sh@385 -- # trap - 
SIGINT SIGTERM EXIT 00:18:10.920 04:16:10 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:18:10.920 04:16:10 -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:10.920 04:16:10 -- common/autotest_common.sh@10 -- # set +x 00:18:10.920 04:16:10 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:18:10.920 04:16:10 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:18:10.920 04:16:10 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:18:10.920 04:16:10 -- common/autotest_common.sh@10 -- # set +x 00:18:13.463 INFO: APP EXITING 00:18:13.463 INFO: killing all VMs 00:18:13.463 INFO: killing vhost app 00:18:13.463 INFO: EXIT DONE 00:18:13.723 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:13.723 Waiting for block devices as requested 00:18:13.723 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:18:13.723 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:18:14.665 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:14.665 Cleaning 00:18:14.665 Removing: /var/run/dpdk/spdk0/config 00:18:14.665 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:18:14.925 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:18:14.925 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:18:14.925 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:18:14.925 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:18:14.925 Removing: /var/run/dpdk/spdk0/hugepage_info 00:18:14.925 Removing: /dev/shm/spdk_tgt_trace.pid68995 00:18:14.925 Removing: /var/run/dpdk/spdk0 00:18:14.925 Removing: /var/run/dpdk/spdk_pid100063 00:18:14.925 Removing: /var/run/dpdk/spdk_pid100315 00:18:14.925 Removing: /var/run/dpdk/spdk_pid100360 00:18:14.925 Removing: /var/run/dpdk/spdk_pid100391 00:18:14.925 Removing: /var/run/dpdk/spdk_pid100612 00:18:14.925 Removing: /var/run/dpdk/spdk_pid100781 00:18:14.925 Removing: 
/var/run/dpdk/spdk_pid100857 00:18:14.925 Removing: /var/run/dpdk/spdk_pid100944 00:18:14.925 Removing: /var/run/dpdk/spdk_pid100982 00:18:14.925 Removing: /var/run/dpdk/spdk_pid101013 00:18:14.925 Removing: /var/run/dpdk/spdk_pid68821 00:18:14.925 Removing: /var/run/dpdk/spdk_pid68995 00:18:14.925 Removing: /var/run/dpdk/spdk_pid69202 00:18:14.925 Removing: /var/run/dpdk/spdk_pid69290 00:18:14.925 Removing: /var/run/dpdk/spdk_pid69324 00:18:14.925 Removing: /var/run/dpdk/spdk_pid69430 00:18:14.925 Removing: /var/run/dpdk/spdk_pid69448 00:18:14.925 Removing: /var/run/dpdk/spdk_pid69636 00:18:14.925 Removing: /var/run/dpdk/spdk_pid69718 00:18:14.925 Removing: /var/run/dpdk/spdk_pid69805 00:18:14.925 Removing: /var/run/dpdk/spdk_pid69908 00:18:14.925 Removing: /var/run/dpdk/spdk_pid69988 00:18:14.925 Removing: /var/run/dpdk/spdk_pid70033 00:18:14.925 Removing: /var/run/dpdk/spdk_pid70064 00:18:14.925 Removing: /var/run/dpdk/spdk_pid70135 00:18:14.925 Removing: /var/run/dpdk/spdk_pid70252 00:18:14.925 Removing: /var/run/dpdk/spdk_pid70680 00:18:14.925 Removing: /var/run/dpdk/spdk_pid70733 00:18:14.925 Removing: /var/run/dpdk/spdk_pid70785 00:18:14.925 Removing: /var/run/dpdk/spdk_pid70801 00:18:14.925 Removing: /var/run/dpdk/spdk_pid70870 00:18:14.925 Removing: /var/run/dpdk/spdk_pid70886 00:18:14.925 Removing: /var/run/dpdk/spdk_pid70963 00:18:14.925 Removing: /var/run/dpdk/spdk_pid70973 00:18:14.925 Removing: /var/run/dpdk/spdk_pid71026 00:18:14.926 Removing: /var/run/dpdk/spdk_pid71044 00:18:14.926 Removing: /var/run/dpdk/spdk_pid71094 00:18:14.926 Removing: /var/run/dpdk/spdk_pid71106 00:18:14.926 Removing: /var/run/dpdk/spdk_pid71250 00:18:14.926 Removing: /var/run/dpdk/spdk_pid71286 00:18:14.926 Removing: /var/run/dpdk/spdk_pid71370 00:18:14.926 Removing: /var/run/dpdk/spdk_pid72563 00:18:14.926 Removing: /var/run/dpdk/spdk_pid72758 00:18:14.926 Removing: /var/run/dpdk/spdk_pid72893 00:18:14.926 Removing: /var/run/dpdk/spdk_pid73503 00:18:14.926 Removing: 
/var/run/dpdk/spdk_pid73709 00:18:14.926 Removing: /var/run/dpdk/spdk_pid73838 00:18:15.186 Removing: /var/run/dpdk/spdk_pid74448 00:18:15.187 Removing: /var/run/dpdk/spdk_pid74767 00:18:15.187 Removing: /var/run/dpdk/spdk_pid74902 00:18:15.187 Removing: /var/run/dpdk/spdk_pid76248 00:18:15.187 Removing: /var/run/dpdk/spdk_pid76490 00:18:15.187 Removing: /var/run/dpdk/spdk_pid76625 00:18:15.187 Removing: /var/run/dpdk/spdk_pid77966 00:18:15.187 Removing: /var/run/dpdk/spdk_pid78208 00:18:15.187 Removing: /var/run/dpdk/spdk_pid78338 00:18:15.187 Removing: /var/run/dpdk/spdk_pid79684 00:18:15.187 Removing: /var/run/dpdk/spdk_pid80119 00:18:15.187 Removing: /var/run/dpdk/spdk_pid80248 00:18:15.187 Removing: /var/run/dpdk/spdk_pid81678 00:18:15.187 Removing: /var/run/dpdk/spdk_pid81926 00:18:15.187 Removing: /var/run/dpdk/spdk_pid82061 00:18:15.187 Removing: /var/run/dpdk/spdk_pid83491 00:18:15.187 Removing: /var/run/dpdk/spdk_pid83739 00:18:15.187 Removing: /var/run/dpdk/spdk_pid83879 00:18:15.187 Removing: /var/run/dpdk/spdk_pid85309 00:18:15.187 Removing: /var/run/dpdk/spdk_pid85787 00:18:15.187 Removing: /var/run/dpdk/spdk_pid85920 00:18:15.187 Removing: /var/run/dpdk/spdk_pid86054 00:18:15.187 Removing: /var/run/dpdk/spdk_pid86468 00:18:15.187 Removing: /var/run/dpdk/spdk_pid87184 00:18:15.187 Removing: /var/run/dpdk/spdk_pid87549 00:18:15.187 Removing: /var/run/dpdk/spdk_pid88231 00:18:15.187 Removing: /var/run/dpdk/spdk_pid88658 00:18:15.187 Removing: /var/run/dpdk/spdk_pid89401 00:18:15.187 Removing: /var/run/dpdk/spdk_pid89794 00:18:15.187 Removing: /var/run/dpdk/spdk_pid91707 00:18:15.187 Removing: /var/run/dpdk/spdk_pid92140 00:18:15.187 Removing: /var/run/dpdk/spdk_pid92564 00:18:15.187 Removing: /var/run/dpdk/spdk_pid94602 00:18:15.187 Removing: /var/run/dpdk/spdk_pid95075 00:18:15.187 Removing: /var/run/dpdk/spdk_pid95562 00:18:15.187 Removing: /var/run/dpdk/spdk_pid96605 00:18:15.187 Removing: /var/run/dpdk/spdk_pid96922 00:18:15.187 Removing: 
/var/run/dpdk/spdk_pid97839 00:18:15.187 Removing: /var/run/dpdk/spdk_pid98156 00:18:15.187 Removing: /var/run/dpdk/spdk_pid99080 00:18:15.187 Removing: /var/run/dpdk/spdk_pid99392 00:18:15.187 Clean 00:18:15.187 04:16:15 -- common/autotest_common.sh@1453 -- # return 0 00:18:15.187 04:16:15 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:18:15.187 04:16:15 -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:15.187 04:16:15 -- common/autotest_common.sh@10 -- # set +x 00:18:15.447 04:16:15 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:18:15.447 04:16:15 -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:15.447 04:16:15 -- common/autotest_common.sh@10 -- # set +x 00:18:15.447 04:16:15 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:18:15.447 04:16:15 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:18:15.447 04:16:15 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:18:15.447 04:16:15 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:18:15.447 04:16:15 -- spdk/autotest.sh@398 -- # hostname 00:18:15.447 04:16:15 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:18:15.707 geninfo: WARNING: invalid characters removed from testname! 
00:18:42.279 04:16:38 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:18:42.279 04:16:41 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:18:44.194 04:16:43 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:18:46.145 04:16:46 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:18:48.688 04:16:48 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:18:50.598 04:16:50 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:18:53.139 04:16:52 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:18:53.139 04:16:52 -- spdk/autorun.sh@1 -- $ timing_finish 00:18:53.139 04:16:52 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:18:53.139 04:16:52 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:18:53.139 04:16:52 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:18:53.139 04:16:52 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:18:53.139 + [[ -n 6162 ]] 00:18:53.139 + sudo kill 6162 00:18:53.149 [Pipeline] } 00:18:53.166 [Pipeline] // timeout 00:18:53.172 [Pipeline] } 00:18:53.186 [Pipeline] // stage 00:18:53.192 [Pipeline] } 00:18:53.206 [Pipeline] // catchError 00:18:53.217 [Pipeline] stage 00:18:53.219 [Pipeline] { (Stop VM) 00:18:53.232 [Pipeline] sh 00:18:53.517 + vagrant halt 00:18:56.057 ==> default: Halting domain... 00:19:04.206 [Pipeline] sh 00:19:04.490 + vagrant destroy -f 00:19:07.031 ==> default: Removing domain... 
00:19:07.045 [Pipeline] sh 00:19:07.332 + mv output /var/jenkins/workspace/raid-vg-autotest/output 00:19:07.342 [Pipeline] } 00:19:07.358 [Pipeline] // stage 00:19:07.363 [Pipeline] } 00:19:07.377 [Pipeline] // dir 00:19:07.382 [Pipeline] } 00:19:07.396 [Pipeline] // wrap 00:19:07.403 [Pipeline] } 00:19:07.415 [Pipeline] // catchError 00:19:07.424 [Pipeline] stage 00:19:07.427 [Pipeline] { (Epilogue) 00:19:07.443 [Pipeline] sh 00:19:07.729 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:19:11.984 [Pipeline] catchError 00:19:11.987 [Pipeline] { 00:19:12.000 [Pipeline] sh 00:19:12.285 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:19:12.285 Artifacts sizes are good 00:19:12.295 [Pipeline] } 00:19:12.308 [Pipeline] // catchError 00:19:12.319 [Pipeline] archiveArtifacts 00:19:12.326 Archiving artifacts 00:19:12.423 [Pipeline] cleanWs 00:19:12.435 [WS-CLEANUP] Deleting project workspace... 00:19:12.435 [WS-CLEANUP] Deferred wipeout is used... 00:19:12.442 [WS-CLEANUP] done 00:19:12.444 [Pipeline] } 00:19:12.460 [Pipeline] // stage 00:19:12.465 [Pipeline] } 00:19:12.480 [Pipeline] // node 00:19:12.486 [Pipeline] End of Pipeline 00:19:12.532 Finished: SUCCESS